Test Report: KVM_Linux_containerd 17086

9a32fbe416941fe3be1e8bb0a72042cc4c15bbaa:2023-08-23:30696

Tests failed (2/302)

Order  Failed test                       Duration (s)
221    TestRunningBinaryUpgrade          909.54
228    TestStoppedBinaryUpgrade/Upgrade  1019.49
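The failing step in TestRunningBinaryUpgrade is the second start shown below: the test first brings a profile up with an old minikube release, then re-starts the same, still-running profile with the binary under test. The sequence can be replayed by hand with the two commands the test issued (the temp path of the v1.22.0 binary, the built binary path, and the profile name are specific to this CI run; a local reproduction would substitute its own copies):

    # 1) Start the profile with the previous minikube release (old binary path as recorded in this run)
    /tmp/minikube-v1.22.0.965652295.exe start -p running-upgrade-502460 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    # 2) Re-start the same running profile with the freshly built binary; in this run this command exited with status 109 after ~13 minutes
    out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd
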
TestRunningBinaryUpgrade (909.54s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.965652295.exe start -p running-upgrade-502460 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.965652295.exe start -p running-upgrade-502460 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m6.784359759s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109 (12m56.705572979s)

-- stdout --
	* [running-upgrade-502460] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-502460 in cluster running-upgrade-502460
	* Updating the running kvm2 "running-upgrade-502460" VM ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0823 19:01:59.559009   46108 out.go:296] Setting OutFile to fd 1 ...
	I0823 19:01:59.559168   46108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 19:01:59.559179   46108 out.go:309] Setting ErrFile to fd 2...
	I0823 19:01:59.559187   46108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 19:01:59.559473   46108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 19:01:59.560234   46108 out.go:303] Setting JSON to false
	I0823 19:01:59.561552   46108 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6264,"bootTime":1692811056,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 19:01:59.561630   46108 start.go:138] virtualization: kvm guest
	I0823 19:01:59.564489   46108 out.go:177] * [running-upgrade-502460] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0823 19:01:59.566579   46108 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 19:01:59.566602   46108 notify.go:220] Checking for updates...
	I0823 19:01:59.568185   46108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 19:01:59.569745   46108 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 19:01:59.571279   46108 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 19:01:59.572661   46108 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0823 19:01:59.573977   46108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 19:01:59.576548   46108 config.go:182] Loaded profile config "running-upgrade-502460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0823 19:01:59.578399   46108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 19:01:59.578458   46108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 19:01:59.593536   46108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0823 19:01:59.593988   46108 main.go:141] libmachine: () Calling .GetVersion
	I0823 19:01:59.594600   46108 main.go:141] libmachine: Using API Version  1
	I0823 19:01:59.594630   46108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 19:01:59.594973   46108 main.go:141] libmachine: () Calling .GetMachineName
	I0823 19:01:59.595142   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:01:59.597035   46108 out.go:177] * Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
	I0823 19:01:59.598452   46108 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 19:01:59.598879   46108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 19:01:59.598931   46108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 19:01:59.613924   46108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0823 19:01:59.614273   46108 main.go:141] libmachine: () Calling .GetVersion
	I0823 19:01:59.614877   46108 main.go:141] libmachine: Using API Version  1
	I0823 19:01:59.614917   46108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 19:01:59.615252   46108 main.go:141] libmachine: () Calling .GetMachineName
	I0823 19:01:59.615454   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:01:59.656519   46108 out.go:177] * Using the kvm2 driver based on existing profile
	I0823 19:01:59.657958   46108 start.go:298] selected driver: kvm2
	I0823 19:01:59.657974   46108 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-502460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 19:01:59.658091   46108 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 19:01:59.659010   46108 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 19:01:59.659139   46108 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17086-11104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0823 19:01:59.674948   46108 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0823 19:01:59.675250   46108 cni.go:84] Creating CNI manager for ""
	I0823 19:01:59.675264   46108 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0823 19:01:59.675273   46108 start_flags.go:319] config:
	{Name:running-upgrade-502460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 19:01:59.675424   46108 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 19:01:59.677118   46108 out.go:177] * Starting control plane node running-upgrade-502460 in cluster running-upgrade-502460
	I0823 19:01:59.678407   46108 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0823 19:01:59.678447   46108 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4
	I0823 19:01:59.678469   46108 cache.go:57] Caching tarball of preloaded images
	I0823 19:01:59.678570   46108 preload.go:174] Found /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0823 19:01:59.678587   46108 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
	I0823 19:01:59.678735   46108 profile.go:148] Saving config to /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/config.json ...
	I0823 19:01:59.678913   46108 start.go:365] acquiring machines lock for running-upgrade-502460: {Name:mk1833667e1e194459e10edb6eaddedbcc5a0864 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 19:02:09.126694   46108 start.go:369] acquired machines lock for "running-upgrade-502460" in 9.447741547s
	I0823 19:02:09.126754   46108 start.go:96] Skipping create...Using existing machine configuration
	I0823 19:02:09.126766   46108 fix.go:54] fixHost starting: 
	I0823 19:02:09.127167   46108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 19:02:09.127200   46108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 19:02:09.146641   46108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0823 19:02:09.147102   46108 main.go:141] libmachine: () Calling .GetVersion
	I0823 19:02:09.147642   46108 main.go:141] libmachine: Using API Version  1
	I0823 19:02:09.147665   46108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 19:02:09.148028   46108 main.go:141] libmachine: () Calling .GetMachineName
	I0823 19:02:09.148187   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:02:09.148320   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetState
	I0823 19:02:09.149968   46108 fix.go:102] recreateIfNeeded on running-upgrade-502460: state=Running err=<nil>
	W0823 19:02:09.150005   46108 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 19:02:09.151742   46108 out.go:177] * Updating the running kvm2 "running-upgrade-502460" VM ...
	I0823 19:02:09.153376   46108 machine.go:88] provisioning docker machine ...
	I0823 19:02:09.153398   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:02:09.153597   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetMachineName
	I0823 19:02:09.153762   46108 buildroot.go:166] provisioning hostname "running-upgrade-502460"
	I0823 19:02:09.153785   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetMachineName
	I0823 19:02:09.153937   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:09.156271   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.156684   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:09.156722   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.156859   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
	I0823 19:02:09.157024   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.157170   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.157281   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
	I0823 19:02:09.157436   46108 main.go:141] libmachine: Using SSH client type: native
	I0823 19:02:09.158184   46108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0823 19:02:09.158206   46108 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-502460 && echo "running-upgrade-502460" | sudo tee /etc/hostname
	I0823 19:02:09.280690   46108 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-502460
	
	I0823 19:02:09.280711   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:09.283814   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.284222   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:09.284254   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.284446   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
	I0823 19:02:09.284618   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.284756   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.284871   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
	I0823 19:02:09.285058   46108 main.go:141] libmachine: Using SSH client type: native
	I0823 19:02:09.285727   46108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0823 19:02:09.285755   46108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-502460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-502460/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-502460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 19:02:09.403737   46108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 19:02:09.403759   46108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17086-11104/.minikube CaCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17086-11104/.minikube}
	I0823 19:02:09.403798   46108 buildroot.go:174] setting up certificates
	I0823 19:02:09.403812   46108 provision.go:83] configureAuth start
	I0823 19:02:09.403825   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetMachineName
	I0823 19:02:09.404148   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetIP
	I0823 19:02:09.407289   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.407688   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:09.407718   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.407997   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:09.410663   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.411103   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:09.411135   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.411273   46108 provision.go:138] copyHostCerts
	I0823 19:02:09.411330   46108 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem, removing ...
	I0823 19:02:09.411349   46108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem
	I0823 19:02:09.411413   46108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem (1078 bytes)
	I0823 19:02:09.411513   46108 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem, removing ...
	I0823 19:02:09.411523   46108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem
	I0823 19:02:09.411553   46108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem (1123 bytes)
	I0823 19:02:09.411629   46108 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem, removing ...
	I0823 19:02:09.411641   46108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem
	I0823 19:02:09.411665   46108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem (1675 bytes)
	I0823 19:02:09.411722   46108 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-502460 san=[192.168.61.47 192.168.61.47 localhost 127.0.0.1 minikube running-upgrade-502460]
	I0823 19:02:09.571903   46108 provision.go:172] copyRemoteCerts
	I0823 19:02:09.571959   46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 19:02:09.571981   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:09.575284   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.575729   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:09.575777   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.575989   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
	I0823 19:02:09.576182   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.576361   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
	I0823 19:02:09.576514   46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
	I0823 19:02:09.677496   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0823 19:02:09.699884   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0823 19:02:09.722022   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 19:02:09.740667   46108 provision.go:86] duration metric: configureAuth took 336.842286ms
	I0823 19:02:09.740693   46108 buildroot.go:189] setting minikube options for container-runtime
	I0823 19:02:09.740926   46108 config.go:182] Loaded profile config "running-upgrade-502460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0823 19:02:09.740941   46108 machine.go:91] provisioned docker machine in 587.553047ms
	I0823 19:02:09.740949   46108 start.go:300] post-start starting for "running-upgrade-502460" (driver="kvm2")
	I0823 19:02:09.740964   46108 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 19:02:09.740993   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:02:09.741339   46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 19:02:09.741366   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:09.744605   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.745027   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:09.745072   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.745341   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
	I0823 19:02:09.745557   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.745755   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
	I0823 19:02:09.745918   46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
	I0823 19:02:09.839563   46108 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 19:02:09.844913   46108 info.go:137] Remote host: Buildroot 2020.02.12
	I0823 19:02:09.844941   46108 filesync.go:126] Scanning /home/jenkins/minikube-integration/17086-11104/.minikube/addons for local assets ...
	I0823 19:02:09.845035   46108 filesync.go:126] Scanning /home/jenkins/minikube-integration/17086-11104/.minikube/files for local assets ...
	I0823 19:02:09.845134   46108 filesync.go:149] local asset: /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0823 19:02:09.845250   46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0823 19:02:09.853713   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0823 19:02:09.876199   46108 start.go:303] post-start completed in 135.236199ms
	I0823 19:02:09.876226   46108 fix.go:56] fixHost completed within 749.461588ms
	I0823 19:02:09.876252   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:09.878889   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.879326   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:09.879365   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:09.879585   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
	I0823 19:02:09.879761   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.879970   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:09.880175   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
	I0823 19:02:09.880407   46108 main.go:141] libmachine: Using SSH client type: native
	I0823 19:02:09.880792   46108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0823 19:02:09.880806   46108 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0823 19:02:10.002434   46108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692817329.999811244
	
	I0823 19:02:10.002457   46108 fix.go:206] guest clock: 1692817329.999811244
	I0823 19:02:10.002467   46108 fix.go:219] Guest: 2023-08-23 19:02:09.999811244 +0000 UTC Remote: 2023-08-23 19:02:09.876231253 +0000 UTC m=+10.361617869 (delta=123.579991ms)
	I0823 19:02:10.002514   46108 fix.go:190] guest clock delta is within tolerance: 123.579991ms
	I0823 19:02:10.002524   46108 start.go:83] releasing machines lock for "running-upgrade-502460", held for 875.807589ms
	I0823 19:02:10.002553   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:02:10.002822   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetIP
	I0823 19:02:10.005630   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:10.006011   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:10.006066   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:10.006256   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:02:10.006804   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:02:10.006982   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
	I0823 19:02:10.007076   46108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 19:02:10.007136   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:10.007186   46108 ssh_runner.go:195] Run: cat /version.json
	I0823 19:02:10.007215   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
	I0823 19:02:10.010343   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:10.010472   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:10.010988   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:10.011043   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:10.011079   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:10.011099   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:10.011228   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
	I0823 19:02:10.011426   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:10.011468   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
	I0823 19:02:10.011569   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
	I0823 19:02:10.011688   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
	I0823 19:02:10.011697   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
	I0823 19:02:10.011896   46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
	I0823 19:02:10.012646   46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
	W0823 19:02:10.121853   46108 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0823 19:02:10.121928   46108 ssh_runner.go:195] Run: systemctl --version
	I0823 19:02:10.127729   46108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 19:02:10.133774   46108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 19:02:10.133855   46108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 19:02:10.152518   46108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 19:02:10.152557   46108 start.go:466] detecting cgroup driver to use...
	I0823 19:02:10.152660   46108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 19:02:10.177658   46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 19:02:10.192068   46108 docker.go:196] disabling cri-docker service (if available) ...
	I0823 19:02:10.192129   46108 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0823 19:02:10.201976   46108 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0823 19:02:10.231584   46108 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0823 19:02:10.248997   46108 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0823 19:02:10.249116   46108 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0823 19:02:10.490697   46108 docker.go:212] disabling docker service ...
	I0823 19:02:10.490764   46108 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0823 19:02:10.504028   46108 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0823 19:02:10.515835   46108 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0823 19:02:10.707732   46108 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0823 19:02:10.930920   46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0823 19:02:10.958665   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 19:02:10.984997   46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0823 19:02:11.001419   46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 19:02:11.009827   46108 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 19:02:11.009882   46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 19:02:11.018171   46108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 19:02:11.025065   46108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 19:02:11.032516   46108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 19:02:11.040957   46108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 19:02:11.051305   46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 19:02:11.058329   46108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 19:02:11.064752   46108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 19:02:11.072395   46108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 19:02:11.231303   46108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 19:02:11.267644   46108 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0823 19:02:11.267731   46108 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0823 19:02:11.275085   46108 retry.go:31] will retry after 897.66326ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0823 19:02:12.172970   46108 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0823 19:02:12.178619   46108 retry.go:31] will retry after 1.167959927s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0823 19:02:13.346903   46108 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0823 19:02:13.354663   46108 start.go:534] Will wait 60s for crictl version
	I0823 19:02:13.354728   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:13.359400   46108 ssh_runner.go:195] Run: sudo /bin/crictl version
	I0823 19:02:13.382614   46108 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.4
	RuntimeApiVersion:  v1alpha2
	I0823 19:02:13.382683   46108 ssh_runner.go:195] Run: containerd --version
	I0823 19:02:13.425358   46108 ssh_runner.go:195] Run: containerd --version
	I0823 19:02:13.462177   46108 out.go:177] * Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	I0823 19:02:13.463371   46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetIP
	I0823 19:02:13.466725   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:13.467124   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
	I0823 19:02:13.467163   46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
	I0823 19:02:13.467522   46108 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0823 19:02:13.473273   46108 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0823 19:02:13.473348   46108 ssh_runner.go:195] Run: sudo crictl images --output json
	I0823 19:02:13.498679   46108 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.21.2". assuming images are not preloaded.
	I0823 19:02:13.498761   46108 ssh_runner.go:195] Run: which lz4
	I0823 19:02:13.504924   46108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 19:02:13.511041   46108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0823 19:02:13.511077   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (483579245 bytes)
	I0823 19:02:15.616813   46108 containerd.go:547] Took 2.111927 seconds to copy over tarball
	I0823 19:02:15.616882   46108 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 19:02:19.667461   46108 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.05054972s)
	I0823 19:02:19.667493   46108 containerd.go:554] Took 4.050658 seconds to extract the tarball
	I0823 19:02:19.667501   46108 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0823 19:02:19.708329   46108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 19:02:19.841945   46108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 19:02:20.815014   46108 ssh_runner.go:195] Run: sudo crictl images --output json
	I0823 19:02:21.838061   46108 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.023010088s)
	I0823 19:02:21.838208   46108 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.21.2". assuming images are not preloaded.
	I0823 19:02:21.838222   46108 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.21.2 registry.k8s.io/kube-controller-manager:v1.21.2 registry.k8s.io/kube-scheduler:v1.21.2 registry.k8s.io/kube-proxy:v1.21.2 registry.k8s.io/pause:3.4.1 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0823 19:02:21.838291   46108 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 19:02:21.838321   46108 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.21.2
	I0823 19:02:21.838344   46108 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 19:02:21.838354   46108 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0823 19:02:21.838504   46108 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.21.2
	I0823 19:02:21.838531   46108 image.go:134] retrieving image: registry.k8s.io/pause:3.4.1
	I0823 19:02:21.838541   46108 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.0
	I0823 19:02:21.838557   46108 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.21.2
	I0823 19:02:21.839915   46108 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 19:02:21.839916   46108 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 19:02:21.839928   46108 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0823 19:02:21.840053   46108 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.21.2
	I0823 19:02:21.840301   46108 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.0: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.0
	I0823 19:02:21.840820   46108 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.21.2
	I0823 19:02:21.841834   46108 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.21.2
	I0823 19:02:21.841854   46108 image.go:177] daemon lookup for registry.k8s.io/pause:3.4.1: Error response from daemon: No such image: registry.k8s.io/pause:3.4.1
	I0823 19:02:22.002403   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.21.2"
	I0823 19:02:22.012881   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.21.2"
	I0823 19:02:22.028652   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.13-0"
	I0823 19:02:22.039212   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.21.2"
	I0823 19:02:22.043199   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.0"
	I0823 19:02:22.074439   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.4.1"
	I0823 19:02:22.088783   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.21.2"
	I0823 19:02:22.349284   46108 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.21.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.21.2" does not exist at hash "106ff58d4308243e0042862435f5a0b14dd332d8151f17a739046c7df33c7ae6" in container runtime
	I0823 19:02:22.349336   46108 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.21.2
	I0823 19:02:22.349384   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:22.905469   46108 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.21.2" needs transfer: "registry.k8s.io/kube-proxy:v1.21.2" does not exist at hash "a6ebd1c1ad9810239a2885494ae92e0230224bafcb39ef1433c6cb49a98b0dfe" in container runtime
	I0823 19:02:22.905519   46108 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.21.2
	I0823 19:02:22.905595   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:23.145643   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.13-0": (1.11695521s)
	I0823 19:02:23.145695   46108 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0823 19:02:23.145722   46108 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0823 19:02:23.145766   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:23.234349   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.21.2": (1.195103468s)
	I0823 19:02:23.234395   46108 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.21.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.21.2" does not exist at hash "f917b8c8f55b7fd9bcd895920e2c16fb3e3770c94eba844262a57a55c6187d86" in container runtime
	I0823 19:02:23.234425   46108 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.21.2
	I0823 19:02:23.234475   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:23.331398   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.0": (1.288163527s)
	I0823 19:02:23.331426   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.4.1": (1.256937619s)
	I0823 19:02:23.331449   46108 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.0" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.0" does not exist at hash "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899" in container runtime
	I0823 19:02:23.331476   46108 cache_images.go:116] "registry.k8s.io/pause:3.4.1" needs transfer: "registry.k8s.io/pause:3.4.1" does not exist at hash "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253" in container runtime
	I0823 19:02:23.331483   46108 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.0
	I0823 19:02:23.331508   46108 cri.go:218] Removing image: registry.k8s.io/pause:3.4.1
	I0823 19:02:23.331531   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:23.331586   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:23.379526   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.21.2": (1.290699202s)
	I0823 19:02:23.379554   46108 ssh_runner.go:235] Completed: which crictl: (1.030147825s)
	I0823 19:02:23.379581   46108 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.21.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.21.2" does not exist at hash "ae24db9aa2cc0d8572cc5c1c0eda9f40e0a8170cecefe742a5d7f1d4170f4eb1" in container runtime
	I0823 19:02:23.379615   46108 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 19:02:23.379620   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-apiserver:v1.21.2
	I0823 19:02:23.379659   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:23.379659   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0823 19:02:23.379691   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-scheduler:v1.21.2
	I0823 19:02:23.379621   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-proxy:v1.21.2
	I0823 19:02:23.379731   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/pause:3.4.1
	I0823 19:02:23.379763   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.0
	I0823 19:02:23.469936   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.0
	I0823 19:02:23.470000   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.21.2
	I0823 19:02:23.470024   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.21.2
	I0823 19:02:23.470086   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 19:02:23.470118   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.4.1
	I0823 19:02:23.470188   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.21.2
	I0823 19:02:23.470219   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0823 19:02:23.501599   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.21.2
	I0823 19:02:23.753243   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0823 19:02:24.194118   46108 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0823 19:02:24.194172   46108 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 19:02:24.194219   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:24.203813   46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 19:02:24.415982   46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0823 19:02:24.416108   46108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0823 19:02:24.435470   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0823 19:02:24.654977   46108 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0823 19:02:24.655038   46108 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0823 19:02:26.165937   46108 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.510873877s)
	I0823 19:02:26.165964   46108 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0823 19:02:26.166002   46108 cache_images.go:92] LoadImages completed in 4.327770747s
	W0823 19:02:26.166072   46108 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.0: no such file or directory
	I0823 19:02:26.166136   46108 ssh_runner.go:195] Run: sudo crictl info
	I0823 19:02:26.230797   46108 cni.go:84] Creating CNI manager for ""
	I0823 19:02:26.230828   46108 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0823 19:02:26.230848   46108 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 19:02:26.230871   46108 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.47 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-502460 NodeName:running-upgrade-502460 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0823 19:02:26.231036   46108 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "running-upgrade-502460"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0823 19:02:26.231130   46108 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-502460 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0823 19:02:26.231200   46108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0823 19:02:26.257566   46108 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 19:02:26.257644   46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 19:02:26.277182   46108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
	I0823 19:02:26.310748   46108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 19:02:26.345712   46108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
	I0823 19:02:26.371696   46108 ssh_runner.go:195] Run: grep 192.168.61.47	control-plane.minikube.internal$ /etc/hosts
	I0823 19:02:26.385728   46108 certs.go:56] Setting up /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460 for IP: 192.168.61.47
	I0823 19:02:26.385769   46108 certs.go:190] acquiring lock for shared ca certs: {Name:mk306615e8137283da7a256d08e7c92ef0f9dd28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 19:02:26.385934   46108 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.key
	I0823 19:02:26.385996   46108 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.key
	I0823 19:02:26.386100   46108 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/client.key
	I0823 19:02:26.386179   46108 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/apiserver.key.85e7fa4e
	I0823 19:02:26.386250   46108 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/proxy-client.key
	I0823 19:02:26.386401   46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372.pem (1338 bytes)
	W0823 19:02:26.386460   46108 certs.go:433] ignoring /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0823 19:02:26.386477   46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 19:02:26.386514   46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem (1078 bytes)
	I0823 19:02:26.386562   46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem (1123 bytes)
	I0823 19:02:26.386596   46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem (1675 bytes)
	I0823 19:02:26.386650   46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0823 19:02:26.387300   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 19:02:26.454265   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 19:02:26.492287   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 19:02:26.553819   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 19:02:26.579785   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 19:02:26.613764   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0823 19:02:26.632598   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 19:02:26.668044   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I0823 19:02:26.687571   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0823 19:02:26.706734   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 19:02:26.731144   46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0823 19:02:26.751549   46108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 19:02:26.768580   46108 ssh_runner.go:195] Run: openssl version
	I0823 19:02:26.777149   46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0823 19:02:26.796389   46108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0823 19:02:26.803710   46108 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 23 18:20 /usr/share/ca-certificates/183722.pem
	I0823 19:02:26.803760   46108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0823 19:02:26.812888   46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0823 19:02:26.828576   46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 19:02:26.844331   46108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 19:02:26.859879   46108 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0823 19:02:26.859938   46108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 19:02:26.879653   46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0823 19:02:26.892331   46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0823 19:02:26.912975   46108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0823 19:02:26.922612   46108 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 23 18:20 /usr/share/ca-certificates/18372.pem
	I0823 19:02:26.922669   46108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0823 19:02:26.931699   46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0823 19:02:26.942427   46108 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 19:02:26.947953   46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0823 19:02:26.956823   46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0823 19:02:26.966249   46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0823 19:02:26.974865   46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0823 19:02:26.982698   46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0823 19:02:26.989275   46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0823 19:02:26.995927   46108 kubeadm.go:404] StartCluster: {Name:running-upgrade-502460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 19:02:26.996018   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0823 19:02:26.996063   46108 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0823 19:02:27.016716   46108 cri.go:89] found id: "4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14"
	I0823 19:02:27.016730   46108 cri.go:89] found id: "e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292"
	I0823 19:02:27.016735   46108 cri.go:89] found id: "3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603"
	I0823 19:02:27.016738   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:02:27.016741   46108 cri.go:89] found id: ""
	I0823 19:02:27.016782   46108 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0823 19:02:27.047809   46108 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603","pid":4636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603/rootfs","created":"2023-08-23T19:02:25.299486267Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14","pid":4744,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14/rootfs","created":"2023-08-23T19:02:26.819913613Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8","pid":4426,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8/rootfs","created":"2023-08-23T19:02:23.848636081Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kub
e-system_kube-scheduler-running-upgrade-502460_cef8b9b3c429b31bd63c3b57b52e975c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc","pid":4419,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc/rootfs","created":"2023-08-23T19:02:23.852381438Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-running-upgrade-502460_2c981615bb2d798c2adffe440f9b1774"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45","pid":4526,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/
k8s.io/8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45/rootfs","created":"2023-08-23T19:02:24.44108726Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-502460_98177f65ecff0fba7d65a15845b2e250"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a","pid":4396,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a/rootfs","created":"2023-08-23T19:02:23.791324924Z","annotations":{"io.ku
bernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_cfbb4f1b-ea68-4fb2-9ea5-2c900170cd7b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e","pid":4627,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e/rootfs","created":"2023-08-23T19:02:25.239549514Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8"},"owner":"root"}]
	I0823 19:02:27.047959   46108 cri.go:126] list returned 7 containers
	I0823 19:02:27.047976   46108 cri.go:129] container: {ID:3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 Status:running}
	I0823 19:02:27.047995   46108 cri.go:135] skipping {3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 running}: state = "running", want "paused"
	I0823 19:02:27.048007   46108 cri.go:129] container: {ID:4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 Status:running}
	I0823 19:02:27.048012   46108 cri.go:135] skipping {4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 running}: state = "running", want "paused"
	I0823 19:02:27.048018   46108 cri.go:129] container: {ID:59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8 Status:running}
	I0823 19:02:27.048026   46108 cri.go:131] skipping 59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8 - not in ps
	I0823 19:02:27.048031   46108 cri.go:129] container: {ID:825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc Status:running}
	I0823 19:02:27.048036   46108 cri.go:131] skipping 825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc - not in ps
	I0823 19:02:27.048040   46108 cri.go:129] container: {ID:8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45 Status:running}
	I0823 19:02:27.048051   46108 cri.go:131] skipping 8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45 - not in ps
	I0823 19:02:27.048058   46108 cri.go:129] container: {ID:a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a Status:running}
	I0823 19:02:27.048071   46108 cri.go:131] skipping a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a - not in ps
	I0823 19:02:27.048081   46108 cri.go:129] container: {ID:abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e Status:running}
	I0823 19:02:27.048090   46108 cri.go:135] skipping {abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e running}: state = "running", want "paused"
	I0823 19:02:27.048141   46108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 19:02:27.056549   46108 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0823 19:02:27.056564   46108 kubeadm.go:636] restartCluster start
	I0823 19:02:27.056615   46108 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0823 19:02:27.065569   46108 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0823 19:02:27.066184   46108 kubeconfig.go:135] verify returned: extract IP: "running-upgrade-502460" does not appear in /home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 19:02:27.066512   46108 kubeconfig.go:146] "running-upgrade-502460" context is missing from /home/jenkins/minikube-integration/17086-11104/kubeconfig - will repair!
	I0823 19:02:27.067042   46108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17086-11104/kubeconfig: {Name:mkb6ab3495f5663c5ba2bb1ce0b9748373e0a0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 19:02:27.067885   46108 kapi.go:59] client config for running-upgrade-502460: &rest.Config{Host:"https://192.168.61.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/client.crt", KeyFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/client.key", CAFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0823 19:02:27.068766   46108 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0823 19:02:27.076116   46108 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -52,6 +52,8 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	@@ -68,3 +70,7 @@
	 metricsBindAddress: 0.0.0.0:10249
	 conntrack:
	   maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
	I0823 19:02:27.076135   46108 kubeadm.go:1128] stopping kube-system containers ...
	I0823 19:02:27.076146   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0823 19:02:27.076193   46108 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0823 19:02:27.097890   46108 cri.go:89] found id: "4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14"
	I0823 19:02:27.097916   46108 cri.go:89] found id: "e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292"
	I0823 19:02:27.097934   46108 cri.go:89] found id: "3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603"
	I0823 19:02:27.097940   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:02:27.097945   46108 cri.go:89] found id: ""
	I0823 19:02:27.097951   46108 cri.go:234] Stopping containers: [4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292 3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:02:27.098010   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:02:27.101961   46108 ssh_runner.go:195] Run: sudo /bin/crictl stop --timeout=10 4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292 3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e
	I0823 19:02:37.488386   46108 ssh_runner.go:235] Completed: sudo /bin/crictl stop --timeout=10 4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292 3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e: (10.386376842s)
	I0823 19:02:37.488473   46108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0823 19:02:37.555491   46108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 19:02:37.566508   46108 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug 23 19:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Aug 23 19:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 23 19:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 23 19:01 /etc/kubernetes/scheduler.conf
	
	I0823 19:02:37.566582   46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0823 19:02:37.574689   46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0823 19:02:37.583211   46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0823 19:02:37.591289   46108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0823 19:02:37.591349   46108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0823 19:02:37.599844   46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0823 19:02:37.609984   46108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0823 19:02:37.610061   46108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0823 19:02:37.619800   46108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 19:02:37.631832   46108 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0823 19:02:37.631851   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 19:02:37.826333   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 19:02:39.040103   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213734593s)
	I0823 19:02:39.040141   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0823 19:02:39.317183   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 19:02:39.443794   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0823 19:02:39.544977   46108 api_server.go:52] waiting for apiserver process to appear ...
	I0823 19:02:39.545056   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:39.554961   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:40.067059   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:40.566917   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:41.067486   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:41.567526   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:42.067403   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:42.567664   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:43.067041   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:43.566942   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:44.067670   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:44.567600   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:45.067435   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:45.566735   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:46.066756   46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 19:02:46.080206   46108 api_server.go:72] duration metric: took 6.535227462s to wait for apiserver process to appear ...
	I0823 19:02:46.080229   46108 api_server.go:88] waiting for apiserver healthz status ...
	I0823 19:02:46.080251   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:02:46.080694   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:02:46.080732   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:02:46.081104   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:02:46.581801   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:02:51.582161   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:02:51.582245   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:02:56.583338   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:02:56.583390   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:01.583946   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:03:01.583995   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:06.401118   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": read tcp 192.168.61.1:41362->192.168.61.47:8443: read: connection reset by peer
	I0823 19:03:06.401161   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:06.401784   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:06.582150   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:06.582831   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:07.081468   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:07.082173   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:07.581776   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:07.582457   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:08.082118   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:08.082797   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:08.581328   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:08.581998   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:09.081532   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:09.082189   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:09.581363   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:09.691470   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:10.081556   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:10.082168   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:10.581783   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:10.582370   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:11.081989   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:11.082590   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:11.581962   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:11.582585   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:12.081906   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:12.082512   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:12.582144   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:12.582821   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:13.081170   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:13.081851   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:13.581384   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:13.582004   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:14.081571   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:14.082224   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:14.581528   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:14.582212   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:15.081859   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:15.082545   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:15.581855   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:15.582431   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:16.082055   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:16.082763   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:16.581292   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:16.581883   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:17.081705   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:17.082396   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:17.582003   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:17.582635   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:18.081194   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:18.081859   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:18.581388   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:18.582026   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:19.081528   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:19.082168   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:19.581320   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:19.581965   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:20.082014   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:20.082696   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:20.581224   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:20.581877   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:21.081279   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:21.081969   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:21.581478   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:21.582098   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:22.081664   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:22.082341   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:22.581932   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:22.582607   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:23.081175   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:23.081884   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:23.581236   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:23.581933   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:24.081337   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:24.081957   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:24.582191   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:24.582843   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:25.081258   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:25.081820   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:25.581364   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:25.581977   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:26.081514   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:26.082285   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:26.581887   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:26.582455   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:27.081348   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:27.082027   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:27.581464   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:27.582051   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:28.081601   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:28.082201   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:28.581867   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:28.582570   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:29.082227   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:29.082851   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:29.582168   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:34.583322   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:03:34.583362   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:39.583639   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:03:39.583689   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:44.584210   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:03:44.584258   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:49.585306   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:03:49.585368   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:49.585432   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:49.602749   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:49.602776   46108 cri.go:89] found id: "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867"
	I0823 19:03:49.602783   46108 cri.go:89] found id: ""
	I0823 19:03:49.602791   46108 logs.go:284] 2 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867]
	I0823 19:03:49.602847   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.607165   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.611706   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:49.611776   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:49.631507   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:49.631528   46108 cri.go:89] found id: ""
	I0823 19:03:49.631536   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:03:49.631591   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.636096   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:49.636150   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:49.652293   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:49.652316   46108 cri.go:89] found id: ""
	I0823 19:03:49.652325   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:03:49.652397   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.656017   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:49.656083   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:49.672398   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:49.672427   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:03:49.672434   46108 cri.go:89] found id: ""
	I0823 19:03:49.672443   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:03:49.672501   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.677411   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.681743   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:49.681797   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:49.706309   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:03:49.706334   46108 cri.go:89] found id: ""
	I0823 19:03:49.706343   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:03:49.706404   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.710957   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:49.711012   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:49.736053   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:03:49.736096   46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:03:49.736104   46108 cri.go:89] found id: ""
	I0823 19:03:49.736112   46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
	I0823 19:03:49.736157   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.741190   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.746922   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:49.746987   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:49.768045   46108 cri.go:89] found id: ""
	I0823 19:03:49.768069   46108 logs.go:284] 0 containers: []
	W0823 19:03:49.768077   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:49.768086   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:49.768146   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:49.807670   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:49.807694   46108 cri.go:89] found id: ""
	I0823 19:03:49.807703   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:03:49.807759   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:49.813718   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:49.813751   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:49.826162   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:03:49.826190   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:49.868495   46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
	I0823 19:03:49.868527   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:03:49.911658   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:03:49.911706   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:49.941852   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:03:49.941896   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:49.964808   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:03:49.964838   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:03:49.990986   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:03:49.991016   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:03:50.020221   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:03:50.020254   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:03:50.038854   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:50.038884   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:50.099395   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:03:50.099433   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:50.121023   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:03:50.121052   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:50.138361   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:50.138386   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:50.293892   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
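	Note: the "describe nodes" gather above fails because nothing is accepting connections on localhost:8443, so the bundled kubectl cannot reach the API server at all. As an illustrative manual check only (a sketch, assuming shell access to the VM, for example via minikube ssh; the ss utility is an assumption and is not used by the test itself):
	
	    # Re-run the exact describe command the log shows, with the same kubeconfig
	    sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    # Confirm whether any process is listening on the apiserver port 8443
	    sudo ss -ltnp | grep 8443
	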
	I0823 19:03:50.293920   46108 logs.go:123] Gathering logs for kube-apiserver [8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867] ...
	I0823 19:03:50.293934   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867"
	W0823 19:03:50.314929   46108 logs.go:130] failed kube-apiserver [8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867" /bin/bash -c "sudo /bin/crictl logs --tail 400 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867": Process exited with status 1
	stdout:
	
	stderr:
	E0823 19:03:50.311640    5852 remote_runtime.go:329] ContainerStatus "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867": not found
	time="2023-08-23T19:03:50Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867\": not found"
	 output: 
	** stderr ** 
	E0823 19:03:50.311640    5852 remote_runtime.go:329] ContainerStatus "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867": not found
	time="2023-08-23T19:03:50Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867\": not found"
	
	** /stderr **
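	Note: this gather fails with NotFound because the kube-apiserver container ID shown above was discovered in an earlier listing pass but has since been removed by containerd, so crictl can no longer resolve it. A hedged sketch of how one could fetch logs from whatever apiserver container currently exists (the <container-id> placeholder is illustrative, not taken from this log):
	
	    # List apiserver containers, including exited ones, to get a current ID
	    sudo crictl ps -a --name=kube-apiserver
	    # Then pull the last 400 lines of logs from that ID
	    sudo crictl logs --tail 400 <container-id>
	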
	I0823 19:03:50.314973   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:03:50.314991   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:50.332180   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:50.332208   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:52.921123   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:52.921755   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
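	Note: minikube is polling the apiserver health endpoint here and getting "connection refused", which is why it keeps falling back into the log-gathering loop seen above. A minimal manual equivalent of that probe (a sketch, assuming shell access to the VM; -k simply skips TLS verification against the cluster's self-signed certificate):
	
	    # Returns "ok" once the apiserver is serving; a refused connection means it is still down
	    curl -k https://192.168.61.47:8443/healthz
	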
	I0823 19:03:52.921813   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:52.921870   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:52.941407   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:52.941437   46108 cri.go:89] found id: ""
	I0823 19:03:52.941446   46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:03:52.941516   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:52.945832   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:52.945904   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:52.965696   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:52.965717   46108 cri.go:89] found id: ""
	I0823 19:03:52.965725   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:03:52.965774   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:52.970033   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:52.970100   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:52.992730   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:52.992752   46108 cri.go:89] found id: ""
	I0823 19:03:52.992760   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:03:52.992829   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:52.997556   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:52.997631   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:53.020896   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:53.020927   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:03:53.020934   46108 cri.go:89] found id: ""
	I0823 19:03:53.020947   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:03:53.021006   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:53.025657   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:53.029353   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:53.029408   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:53.048787   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:03:53.048809   46108 cri.go:89] found id: ""
	I0823 19:03:53.048818   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:03:53.048883   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:53.052821   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:53.052883   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:53.073274   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:03:53.073300   46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:03:53.073307   46108 cri.go:89] found id: ""
	I0823 19:03:53.073316   46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
	I0823 19:03:53.073376   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:53.077467   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:53.082419   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:53.082484   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:53.101808   46108 cri.go:89] found id: ""
	I0823 19:03:53.101831   46108 logs.go:284] 0 containers: []
	W0823 19:03:53.101839   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:53.101844   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:53.101900   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:53.127415   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:53.127439   46108 cri.go:89] found id: ""
	I0823 19:03:53.127448   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:03:53.127501   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:53.132306   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:53.132336   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:53.216923   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:53.216950   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:03:53.216964   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:53.260783   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:03:53.260822   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:03:53.284064   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:03:53.284107   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:53.310696   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:53.310729   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:53.323691   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:03:53.323726   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:53.345293   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:03:53.345319   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:53.362319   46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
	I0823 19:03:53.362359   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:03:53.402288   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:03:53.402322   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:53.418818   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:53.418852   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:53.482743   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:53.482779   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:53.540645   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:03:53.540681   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:03:53.567472   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:03:53.567508   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:03:53.601354   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:03:53.601386   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:56.129596   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:56.130240   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:56.130283   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:56.130336   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:56.152583   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:56.152605   46108 cri.go:89] found id: ""
	I0823 19:03:56.152611   46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:03:56.152658   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.158214   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:56.158289   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:56.178941   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:56.178967   46108 cri.go:89] found id: ""
	I0823 19:03:56.178977   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:03:56.179029   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.184905   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:56.184979   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:56.205181   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:56.205216   46108 cri.go:89] found id: ""
	I0823 19:03:56.205227   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:03:56.205284   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.211073   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:56.211148   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:56.232446   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:56.232473   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:03:56.232480   46108 cri.go:89] found id: ""
	I0823 19:03:56.232488   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:03:56.232550   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.238030   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.243248   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:56.243318   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:56.259395   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:03:56.259419   46108 cri.go:89] found id: ""
	I0823 19:03:56.259427   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:03:56.259482   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.263495   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:56.263621   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:56.280830   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:03:56.280858   46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:03:56.280865   46108 cri.go:89] found id: ""
	I0823 19:03:56.280874   46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
	I0823 19:03:56.280939   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.286370   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.290218   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:56.290282   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:56.313418   46108 cri.go:89] found id: ""
	I0823 19:03:56.313440   46108 logs.go:284] 0 containers: []
	W0823 19:03:56.313447   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:56.313454   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:56.313522   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:56.332979   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:56.333010   46108 cri.go:89] found id: ""
	I0823 19:03:56.333018   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:03:56.333064   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:56.337242   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:03:56.337268   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:56.354521   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:03:56.354557   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:56.378351   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:03:56.378390   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:56.425790   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:56.425835   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:56.487011   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:56.487048   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:56.501482   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:56.501519   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:56.599128   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:56.599161   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:03:56.599175   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:03:56.630149   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:03:56.630188   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:03:56.646749   46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
	I0823 19:03:56.646776   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:03:56.680992   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:56.681083   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:56.754304   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:03:56.754342   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:03:56.781292   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:03:56.781320   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:56.810682   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:03:56.810709   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:56.839836   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:03:56.839866   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:59.358630   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:03:59.359423   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:03:59.359486   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:59.359547   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:59.389657   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:59.389682   46108 cri.go:89] found id: ""
	I0823 19:03:59.389691   46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:03:59.389752   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.394178   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:59.394251   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:59.414275   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:59.414303   46108 cri.go:89] found id: ""
	I0823 19:03:59.414312   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:03:59.414378   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.419333   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:59.419410   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:59.440733   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:59.440765   46108 cri.go:89] found id: ""
	I0823 19:03:59.440774   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:03:59.440830   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.446509   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:59.446586   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:59.468196   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:59.468222   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:03:59.468229   46108 cri.go:89] found id: ""
	I0823 19:03:59.468238   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:03:59.468302   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.474500   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.480335   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:59.480397   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:59.504546   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:03:59.504569   46108 cri.go:89] found id: ""
	I0823 19:03:59.504576   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:03:59.504627   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.510731   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:59.510815   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:59.529519   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:03:59.529567   46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:03:59.529574   46108 cri.go:89] found id: ""
	I0823 19:03:59.529583   46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
	I0823 19:03:59.529646   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.534003   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.538363   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:59.538432   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:59.557294   46108 cri.go:89] found id: ""
	I0823 19:03:59.557316   46108 logs.go:284] 0 containers: []
	W0823 19:03:59.557323   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:59.557328   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:59.557377   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:59.577710   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:59.577733   46108 cri.go:89] found id: ""
	I0823 19:03:59.577746   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:03:59.577807   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:03:59.583075   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:03:59.583102   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:03:59.603621   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:03:59.603659   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:03:59.649624   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:03:59.649663   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:03:59.676391   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:59.676422   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:59.758447   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:59.758483   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:59.820304   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:59.820346   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:59.903942   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:59.903985   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:03:59.904000   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:03:59.924560   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:03:59.924593   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:59.955653   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:59.955678   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:59.967160   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:03:59.967189   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:03:59.986487   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:03:59.986514   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:00.010795   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:00.010827   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:00.046262   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:00.046298   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:00.063940   46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
	I0823 19:04:00.063980   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
	I0823 19:04:02.603550   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:02.604324   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:02.604375   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:02.604445   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:02.628170   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:02.628193   46108 cri.go:89] found id: ""
	I0823 19:04:02.628200   46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:04:02.628254   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.632596   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:02.632671   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:02.653173   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:04:02.653196   46108 cri.go:89] found id: ""
	I0823 19:04:02.653203   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:04:02.653256   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.659210   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:02.659263   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:02.680490   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:02.680512   46108 cri.go:89] found id: ""
	I0823 19:04:02.680519   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:02.680567   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.687686   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:02.687745   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:02.708135   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:02.708153   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:02.708157   46108 cri.go:89] found id: ""
	I0823 19:04:02.708163   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:02.708216   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.712890   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.717324   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:02.717379   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:02.734883   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:02.734917   46108 cri.go:89] found id: ""
	I0823 19:04:02.734927   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:02.734985   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.739344   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:02.739400   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:02.755954   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:02.755981   46108 cri.go:89] found id: ""
	I0823 19:04:02.755990   46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:02.756053   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.760162   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:02.760232   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:02.778881   46108 cri.go:89] found id: ""
	I0823 19:04:02.778908   46108 logs.go:284] 0 containers: []
	W0823 19:04:02.778919   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:02.778926   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:02.778994   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:02.796893   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:02.796918   46108 cri.go:89] found id: ""
	I0823 19:04:02.796927   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:02.796984   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:02.802046   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:04:02.802073   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:04:02.822943   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:02.822979   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:02.851708   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:02.851741   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:02.889674   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:02.889720   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:02.911408   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:02.911445   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:02.944479   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:02.944504   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:02.970681   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:02.970712   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:02.997753   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:02.997785   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:03.060708   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:03.060745   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:03.127019   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:03.127056   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:03.140719   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:03.140757   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:03.246015   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:03.246042   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:04:03.246056   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:03.266591   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:03.266619   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:05.799418   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:05.800151   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:05.800212   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:05.800267   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:05.823657   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:05.823679   46108 cri.go:89] found id: ""
	I0823 19:04:05.823688   46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:04:05.823743   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.829705   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:05.829775   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:05.850755   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:04:05.850795   46108 cri.go:89] found id: ""
	I0823 19:04:05.850803   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:04:05.850854   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.856211   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:05.856276   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:05.875778   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:05.875797   46108 cri.go:89] found id: ""
	I0823 19:04:05.875806   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:05.875863   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.880835   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:05.880901   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:05.899063   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:05.899088   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:05.899095   46108 cri.go:89] found id: ""
	I0823 19:04:05.899104   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:05.899157   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.903709   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.907885   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:05.907948   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:05.927949   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:05.927969   46108 cri.go:89] found id: ""
	I0823 19:04:05.927976   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:05.928029   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.932434   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:05.932493   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:05.951008   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:05.951031   46108 cri.go:89] found id: ""
	I0823 19:04:05.951039   46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:05.951093   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.958246   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:05.958297   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:05.975436   46108 cri.go:89] found id: ""
	I0823 19:04:05.975463   46108 logs.go:284] 0 containers: []
	W0823 19:04:05.975474   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:05.975482   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:05.975546   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:05.993826   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:05.993874   46108 cri.go:89] found id: ""
	I0823 19:04:05.993883   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:05.993952   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:05.998471   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:04:05.998491   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:04:06.015413   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:06.015450   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:06.039783   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:06.039817   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:06.066586   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:06.066624   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:06.102752   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:06.102783   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:06.169165   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:06.169200   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:06.190726   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:06.190756   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:06.208930   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:06.208957   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:06.277589   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:06.277635   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:06.289477   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:06.289505   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:06.388348   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:06.388374   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:04:06.388386   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:06.407928   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:06.407959   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:06.438719   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:06.438751   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:08.977132   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:08.977781   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:08.977832   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:08.977882   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:08.998294   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:08.998315   46108 cri.go:89] found id: ""
	I0823 19:04:08.998321   46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:04:08.998371   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.002307   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:09.002377   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:09.023257   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:04:09.023292   46108 cri.go:89] found id: ""
	I0823 19:04:09.023308   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:04:09.023371   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.027561   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:09.027630   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:09.044233   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:09.044253   46108 cri.go:89] found id: ""
	I0823 19:04:09.044259   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:09.044312   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.048205   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:09.048275   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:09.064091   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:09.064114   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:09.064119   46108 cri.go:89] found id: ""
	I0823 19:04:09.064125   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:09.064175   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.068223   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.072391   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:09.072457   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:09.089261   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:09.089285   46108 cri.go:89] found id: ""
	I0823 19:04:09.089293   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:09.089351   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.093647   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:09.093713   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:09.110349   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:09.110366   46108 cri.go:89] found id: ""
	I0823 19:04:09.110372   46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:09.110415   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.114495   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:09.114558   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:09.133422   46108 cri.go:89] found id: ""
	I0823 19:04:09.133446   46108 logs.go:284] 0 containers: []
	W0823 19:04:09.133456   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:09.133464   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:09.133512   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:09.149623   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:09.149645   46108 cri.go:89] found id: ""
	I0823 19:04:09.149653   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:09.149715   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:09.153567   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:09.153599   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:09.171390   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:09.171416   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:09.241594   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:09.241636   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:09.252767   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:04:09.252793   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:09.283901   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:04:09.283937   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:04:09.299355   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:09.299386   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:09.320130   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:09.320166   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:09.349557   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:09.349587   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:09.381178   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:09.381211   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:09.407571   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:09.407600   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:09.468555   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:09.468593   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:09.560084   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:09.561144   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:09.561163   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:09.599559   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:09.599590   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:12.146416   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:17.147709   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:04:17.147788   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:17.147842   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:17.167915   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:17.167944   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:17.167953   46108 cri.go:89] found id: ""
	I0823 19:04:17.167967   46108 logs.go:284] 2 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:04:17.168025   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.172621   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.176588   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:17.176637   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:17.194115   46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	I0823 19:04:17.194137   46108 cri.go:89] found id: ""
	I0823 19:04:17.194146   46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
	I0823 19:04:17.194195   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.198195   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:17.198249   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:17.212835   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:17.212857   46108 cri.go:89] found id: ""
	I0823 19:04:17.212866   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:17.212915   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.216741   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:17.216802   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:17.237109   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:17.237138   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:17.237144   46108 cri.go:89] found id: ""
	I0823 19:04:17.237153   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:17.237215   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.241499   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.246670   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:17.246738   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:17.267560   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:17.267586   46108 cri.go:89] found id: ""
	I0823 19:04:17.267596   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:17.267654   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.272746   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:17.272818   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:17.288413   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:17.288431   46108 cri.go:89] found id: ""
	I0823 19:04:17.288439   46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:17.288497   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.293366   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:17.293413   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:17.308748   46108 cri.go:89] found id: ""
	I0823 19:04:17.308774   46108 logs.go:284] 0 containers: []
	W0823 19:04:17.308785   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:17.308792   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:17.308852   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:17.329847   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:17.329872   46108 cri.go:89] found id: ""
	I0823 19:04:17.329881   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:17.329936   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:17.335095   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:17.335121   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:17.373018   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:04:17.373057   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:17.395253   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:17.395278   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:17.425070   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:17.425110   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:17.466206   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:17.466234   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:17.491846   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:17.491876   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:17.519607   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:17.519635   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0823 19:04:27.618486   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.09883103s)
	W0823 19:04:27.618554   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0823 19:04:27.618566   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:27.618580   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:27.641768   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:27.641793   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:27.669512   46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
	I0823 19:04:27.669550   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
	W0823 19:04:27.686673   46108 logs.go:130] failed etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0" /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0": Process exited with status 1
	stdout:
	
	stderr:
	E0823 19:04:27.682272    6645 remote_runtime.go:329] ContainerStatus "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0": not found
	time="2023-08-23T19:04:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0\": not found"
	 output: 
	** stderr ** 
	E0823 19:04:27.682272    6645 remote_runtime.go:329] ContainerStatus "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0": not found
	time="2023-08-23T19:04:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0\": not found"
	
	** /stderr **
	I0823 19:04:27.686697   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:27.686711   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:27.750311   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:27.750344   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:27.813436   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:27.813471   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:27.833635   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:27.833661   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:30.351863   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:30.550221   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": read tcp 192.168.61.1:36772->192.168.61.47:8443: read: connection reset by peer
	I0823 19:04:30.550285   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:30.550353   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:30.570519   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:30.570544   46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:30.570550   46108 cri.go:89] found id: ""
	I0823 19:04:30.570558   46108 logs.go:284] 2 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
	I0823 19:04:30.570614   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.576052   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.580004   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:30.580086   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:30.609883   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:30.609908   46108 cri.go:89] found id: ""
	I0823 19:04:30.609917   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:30.609965   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.615842   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:30.615917   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:30.647642   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:30.647665   46108 cri.go:89] found id: ""
	I0823 19:04:30.647673   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:30.647741   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.652938   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:30.653002   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:30.675187   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:30.675215   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:30.675222   46108 cri.go:89] found id: ""
	I0823 19:04:30.675231   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:30.675288   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.680341   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.685856   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:30.685932   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:30.706478   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:30.706504   46108 cri.go:89] found id: ""
	I0823 19:04:30.706513   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:30.706569   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.711231   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:30.711297   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:30.728230   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:30.728257   46108 cri.go:89] found id: ""
	I0823 19:04:30.728267   46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:30.728335   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.734320   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:30.734392   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:30.751781   46108 cri.go:89] found id: ""
	I0823 19:04:30.751806   46108 logs.go:284] 0 containers: []
	W0823 19:04:30.751816   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:30.751824   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:30.751882   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:30.774806   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:30.774831   46108 cri.go:89] found id: ""
	I0823 19:04:30.774840   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:30.774904   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:30.779712   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:30.779742   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:30.799413   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:30.799447   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:30.828917   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:30.828947   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:30.893361   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:30.893395   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:30.989250   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:30.989272   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:30.989282   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:31.016789   46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
	I0823 19:04:31.016820   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
	I0823 19:04:31.038094   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:31.038124   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:31.052980   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:31.053011   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:31.070711   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:31.070742   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:31.110828   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:31.110861   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:31.204670   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:31.204705   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:31.225462   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:31.225504   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:31.263445   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:31.263478   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:31.293188   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:31.293226   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:33.826359   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:33.827025   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:33.827079   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:33.827133   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:33.846362   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:33.846392   46108 cri.go:89] found id: ""
	I0823 19:04:33.846401   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:33.846451   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:33.850535   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:33.850595   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:33.868301   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:33.868323   46108 cri.go:89] found id: ""
	I0823 19:04:33.868331   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:33.868386   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:33.872403   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:33.872488   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:33.892188   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:33.892217   46108 cri.go:89] found id: ""
	I0823 19:04:33.892226   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:33.892285   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:33.896023   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:33.896080   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:33.913400   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:33.913420   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:33.913425   46108 cri.go:89] found id: ""
	I0823 19:04:33.913431   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:33.913479   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:33.918329   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:33.923040   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:33.923112   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:33.943496   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:33.943523   46108 cri.go:89] found id: ""
	I0823 19:04:33.943533   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:33.943590   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:33.947871   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:33.947924   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:33.967460   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:33.967478   46108 cri.go:89] found id: ""
	I0823 19:04:33.967486   46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:33.967550   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:33.972019   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:33.972083   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:33.992206   46108 cri.go:89] found id: ""
	I0823 19:04:33.992230   46108 logs.go:284] 0 containers: []
	W0823 19:04:33.992239   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:33.992248   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:33.992305   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:34.012861   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:34.012884   46108 cri.go:89] found id: ""
	I0823 19:04:34.012892   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:34.012956   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:34.018211   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:34.018243   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:34.042458   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:34.042492   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:34.061290   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:34.061317   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:34.113097   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:34.113134   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:34.138722   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:34.138748   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:34.151729   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:34.151752   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:34.245758   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:34.245779   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:34.245794   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:34.265608   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:34.265637   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:34.290654   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:34.290683   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:34.322342   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:34.322383   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:34.363350   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:34.363394   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:34.384170   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:34.384197   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:34.453756   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:34.453799   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:37.023185   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:37.023946   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:37.023993   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:37.024036   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:37.050342   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:37.050366   46108 cri.go:89] found id: ""
	I0823 19:04:37.050375   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:37.050430   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.054902   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:37.054953   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:37.073038   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:37.073059   46108 cri.go:89] found id: ""
	I0823 19:04:37.073068   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:37.073122   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.077691   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:37.077761   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:37.095129   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:37.095151   46108 cri.go:89] found id: ""
	I0823 19:04:37.095160   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:37.095215   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.099250   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:37.099308   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:37.117187   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:37.117205   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:37.117211   46108 cri.go:89] found id: ""
	I0823 19:04:37.117219   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:37.117276   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.122142   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.127299   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:37.127365   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:37.144191   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:37.144213   46108 cri.go:89] found id: ""
	I0823 19:04:37.144220   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:37.144265   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.150347   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:37.150404   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:37.170969   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:37.170989   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:37.170995   46108 cri.go:89] found id: ""
	I0823 19:04:37.171003   46108 logs.go:284] 2 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:37.171051   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.175726   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.181727   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:37.181776   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:37.199831   46108 cri.go:89] found id: ""
	I0823 19:04:37.199856   46108 logs.go:284] 0 containers: []
	W0823 19:04:37.199866   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:37.199873   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:37.199931   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:37.217009   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:37.217030   46108 cri.go:89] found id: ""
	I0823 19:04:37.217038   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:37.217075   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:37.221307   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:37.221328   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:37.243240   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:37.243265   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:37.266080   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:37.266108   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:37.287448   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:37.287476   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:37.313643   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:37.313670   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:37.332010   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:37.332036   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:37.401934   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:37.401966   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:37.423032   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:37.423051   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:37.460361   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:37.460389   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:37.483235   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:37.483267   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:37.517899   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:37.517927   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:37.548071   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:37.548103   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:37.619832   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:37.619866   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:37.631690   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:37.631723   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:37.730233   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:40.230746   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:40.231346   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:40.231403   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:40.231464   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:40.256053   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:40.256079   46108 cri.go:89] found id: ""
	I0823 19:04:40.256087   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:40.256140   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.261394   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:40.261461   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:40.282848   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:40.282868   46108 cri.go:89] found id: ""
	I0823 19:04:40.282877   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:40.282924   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.287836   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:40.287902   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:40.307273   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:40.307295   46108 cri.go:89] found id: ""
	I0823 19:04:40.307303   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:40.307352   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.313523   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:40.313606   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:40.330071   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:40.330088   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:40.330091   46108 cri.go:89] found id: ""
	I0823 19:04:40.330098   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:40.330140   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.334144   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.339025   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:40.339076   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:40.359547   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:40.359568   46108 cri.go:89] found id: ""
	I0823 19:04:40.359577   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:40.359632   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.364039   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:40.364107   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:40.382590   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:40.382617   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:40.382641   46108 cri.go:89] found id: ""
	I0823 19:04:40.382648   46108 logs.go:284] 2 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:40.382696   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.386839   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.390744   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:40.390806   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:40.408339   46108 cri.go:89] found id: ""
	I0823 19:04:40.408361   46108 logs.go:284] 0 containers: []
	W0823 19:04:40.408368   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:40.408374   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:40.408422   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:40.433691   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:40.433716   46108 cri.go:89] found id: ""
	I0823 19:04:40.433725   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:40.433775   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:40.440794   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:40.440825   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:40.467202   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:40.467239   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:40.501843   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:40.501874   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:40.577973   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:40.578008   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:40.605799   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:40.605838   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:40.620098   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:40.620133   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:40.725365   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:40.725393   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:40.725406   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:40.751398   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:40.751433   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:40.815756   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:40.815786   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:40.841439   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:40.841470   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:40.868326   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:40.868363   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:40.908012   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:40.908057   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:40.931270   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:40.931304   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:40.970295   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:40.970326   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:43.493052   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:43.493798   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:43.493843   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:43.493899   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:43.514176   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:43.514200   46108 cri.go:89] found id: ""
	I0823 19:04:43.514211   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:43.514270   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.518295   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:43.518362   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:43.536645   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:43.536670   46108 cri.go:89] found id: ""
	I0823 19:04:43.536679   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:43.536726   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.540651   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:43.540715   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:43.556125   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:43.556149   46108 cri.go:89] found id: ""
	I0823 19:04:43.556158   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:43.556212   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.560202   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:43.560265   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:43.578794   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:43.578816   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:43.578820   46108 cri.go:89] found id: ""
	I0823 19:04:43.578827   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:43.578869   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.583167   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.587509   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:43.587579   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:43.603744   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:43.603770   46108 cri.go:89] found id: ""
	I0823 19:04:43.603780   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:43.603831   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.607821   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:43.607892   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:43.626283   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:43.626303   46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:43.626306   46108 cri.go:89] found id: ""
	I0823 19:04:43.626313   46108 logs.go:284] 2 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
	I0823 19:04:43.626356   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.630632   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.634182   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:43.634235   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:43.657504   46108 cri.go:89] found id: ""
	I0823 19:04:43.657529   46108 logs.go:284] 0 containers: []
	W0823 19:04:43.657536   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:43.657560   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:43.657615   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:43.680354   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:43.680373   46108 cri.go:89] found id: ""
	I0823 19:04:43.680382   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:43.680438   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:43.684968   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:43.684988   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:43.724936   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:43.724981   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:43.747218   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:43.747247   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:43.814673   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:43.814707   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:43.843059   46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
	I0823 19:04:43.843088   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
	I0823 19:04:43.881388   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:43.881430   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:43.968570   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:43.968596   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:43.968610   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:43.994463   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:43.994493   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:44.004592   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:44.004619   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:44.024487   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:44.024515   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:44.044095   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:44.044126   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:44.080196   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:44.080234   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:44.101008   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:44.101043   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:44.174655   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:44.174688   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:46.696346   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:46.696930   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:46.696983   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:46.697022   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:46.715814   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:46.715837   46108 cri.go:89] found id: ""
	I0823 19:04:46.715847   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:46.715903   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.720540   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:46.720607   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:46.738601   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:46.738625   46108 cri.go:89] found id: ""
	I0823 19:04:46.738634   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:46.738690   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.742455   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:46.742518   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:46.759354   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:46.759379   46108 cri.go:89] found id: ""
	I0823 19:04:46.759388   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:46.759439   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.763540   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:46.763603   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:46.780565   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:46.780588   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:46.780595   46108 cri.go:89] found id: ""
	I0823 19:04:46.780602   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:46.780655   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.784494   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.789519   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:46.789601   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:46.804832   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:46.804851   46108 cri.go:89] found id: ""
	I0823 19:04:46.804860   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:46.804919   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.808776   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:46.808833   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:46.825754   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:46.825776   46108 cri.go:89] found id: ""
	I0823 19:04:46.825784   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:04:46.825838   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.829497   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:46.829559   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:46.846726   46108 cri.go:89] found id: ""
	I0823 19:04:46.846750   46108 logs.go:284] 0 containers: []
	W0823 19:04:46.846759   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:46.846767   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:46.846823   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:46.863686   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:46.863710   46108 cri.go:89] found id: ""
	I0823 19:04:46.863718   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:46.863772   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:46.867477   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:46.867497   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:46.888008   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:46.888037   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:46.912444   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:46.912471   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:46.949715   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:46.949745   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:46.980070   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:46.980103   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:47.049168   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:47.049210   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:47.074971   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:47.075010   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:47.147532   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:47.147564   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:47.159764   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:47.159813   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:47.247590   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:47.247621   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:47.247635   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:47.264857   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:47.264885   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:47.307165   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:47.307201   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:47.333410   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:47.333453   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:49.874688   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:49.875267   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:49.875313   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:49.875361   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:49.893504   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:49.893527   46108 cri.go:89] found id: ""
	I0823 19:04:49.893536   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:49.893609   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:49.897743   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:49.897811   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:49.916405   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:49.916427   46108 cri.go:89] found id: ""
	I0823 19:04:49.916437   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:49.916499   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:49.921706   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:49.921774   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:49.940758   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:49.940780   46108 cri.go:89] found id: ""
	I0823 19:04:49.940789   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:49.940842   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:49.944971   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:49.945041   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:49.963866   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:49.963887   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:49.963891   46108 cri.go:89] found id: ""
	I0823 19:04:49.963897   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:49.963939   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:49.968271   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:49.972063   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:49.972131   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:49.989051   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:49.989096   46108 cri.go:89] found id: ""
	I0823 19:04:49.989106   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:49.989166   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:49.992874   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:49.992936   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:50.008836   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:50.008862   46108 cri.go:89] found id: ""
	I0823 19:04:50.008871   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:04:50.008934   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:50.013122   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:50.013198   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:50.028587   46108 cri.go:89] found id: ""
	I0823 19:04:50.028610   46108 logs.go:284] 0 containers: []
	W0823 19:04:50.028620   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:50.028628   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:50.028690   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:50.045391   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:50.045418   46108 cri.go:89] found id: ""
	I0823 19:04:50.045427   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:50.045479   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:50.050677   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:50.050701   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:50.092067   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:50.092101   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:50.115413   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:50.115450   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:50.133086   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:50.133116   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:50.221813   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:50.221842   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:50.221856   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:50.250981   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:50.251009   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:50.273652   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:50.273683   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:50.301973   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:50.302008   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:50.336341   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:50.336377   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:50.354493   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:50.354525   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:50.418714   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:50.418756   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:50.430688   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:50.430716   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:50.496924   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:50.496973   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:53.018726   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:53.019380   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:53.019429   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:53.019471   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:53.037622   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:53.037642   46108 cri.go:89] found id: ""
	I0823 19:04:53.037649   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:53.037706   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.041854   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:53.041923   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:53.062451   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:53.062473   46108 cri.go:89] found id: ""
	I0823 19:04:53.062481   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:53.062536   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.067317   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:53.067388   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:53.086936   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:53.086976   46108 cri.go:89] found id: ""
	I0823 19:04:53.086985   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:53.087049   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.091960   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:53.092032   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:53.111873   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:53.111897   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:53.111904   46108 cri.go:89] found id: ""
	I0823 19:04:53.111912   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:53.111972   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.116680   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.121269   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:53.121323   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:53.143085   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:53.143106   46108 cri.go:89] found id: ""
	I0823 19:04:53.143117   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:53.143177   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.148747   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:53.148816   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:53.169554   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:53.169575   46108 cri.go:89] found id: ""
	I0823 19:04:53.169582   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:04:53.169636   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.173508   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:53.173586   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:53.192842   46108 cri.go:89] found id: ""
	I0823 19:04:53.192867   46108 logs.go:284] 0 containers: []
	W0823 19:04:53.192876   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:53.192883   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:53.192941   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:53.212551   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:53.212576   46108 cri.go:89] found id: ""
	I0823 19:04:53.212585   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:53.212640   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:53.216429   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:53.216455   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:53.246843   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:53.246870   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:53.259496   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:53.259591   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:53.281158   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:53.281195   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:53.323763   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:53.323802   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:53.347834   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:53.347869   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:53.383646   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:53.383680   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:53.406649   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:53.406686   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:53.436771   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:53.436806   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:53.454754   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:53.454791   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:53.517906   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:53.517937   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:53.594842   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:53.594874   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:53.594890   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:53.612568   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:53.612601   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:56.184122   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:56.184837   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:56.184903   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:56.184964   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:56.204535   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:56.204553   46108 cri.go:89] found id: ""
	I0823 19:04:56.204561   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:56.204615   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.209206   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:56.209268   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:56.225202   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:56.225228   46108 cri.go:89] found id: ""
	I0823 19:04:56.225237   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:56.225295   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.229865   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:56.229925   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:56.245380   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:56.245403   46108 cri.go:89] found id: ""
	I0823 19:04:56.245411   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:56.245463   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.249348   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:56.249407   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:56.265234   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:56.265259   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:56.265266   46108 cri.go:89] found id: ""
	I0823 19:04:56.265274   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:56.265328   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.269742   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.274208   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:56.274267   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:56.291420   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:56.291442   46108 cri.go:89] found id: ""
	I0823 19:04:56.291451   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:56.291504   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.295425   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:56.295491   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:56.314242   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:56.314264   46108 cri.go:89] found id: ""
	I0823 19:04:56.314272   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:04:56.314333   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.318433   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:56.318502   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:56.337506   46108 cri.go:89] found id: ""
	I0823 19:04:56.337527   46108 logs.go:284] 0 containers: []
	W0823 19:04:56.337535   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:56.337558   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:56.337618   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:56.356339   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:56.356364   46108 cri.go:89] found id: ""
	I0823 19:04:56.356374   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:56.356421   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:56.360620   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:04:56.360649   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:56.393943   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:56.393980   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:56.442409   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:04:56.442449   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:04:56.482753   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:56.482784   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:56.558447   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:56.558483   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:56.572072   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:56.572113   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:56.594869   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:56.594894   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:56.616072   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:56.616109   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:56.634784   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:56.634810   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:56.736082   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:56.736114   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:04:56.820648   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:04:56.820673   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:56.820687   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:56.867045   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:56.867088   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:56.892957   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:56.893002   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:59.423015   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:04:59.423805   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:04:59.423860   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:04:59.423919   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:04:59.444448   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:59.444466   46108 cri.go:89] found id: ""
	I0823 19:04:59.444472   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:04:59.444515   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.448579   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:04:59.448639   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:04:59.465677   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:59.465696   46108 cri.go:89] found id: ""
	I0823 19:04:59.465705   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:04:59.465761   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.471324   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:04:59.471405   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:04:59.490341   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:04:59.490358   46108 cri.go:89] found id: ""
	I0823 19:04:59.490365   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:04:59.490419   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.495979   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:04:59.496053   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:04:59.514142   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:59.514166   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:59.514173   46108 cri.go:89] found id: ""
	I0823 19:04:59.514181   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:04:59.514243   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.518120   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.521741   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:04:59.521792   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:04:59.537474   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:59.537497   46108 cri.go:89] found id: ""
	I0823 19:04:59.537506   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:04:59.537574   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.541355   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:04:59.541417   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:04:59.557486   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:59.557507   46108 cri.go:89] found id: ""
	I0823 19:04:59.557516   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:04:59.557581   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.562325   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:04:59.562387   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:04:59.579288   46108 cri.go:89] found id: ""
	I0823 19:04:59.579325   46108 logs.go:284] 0 containers: []
	W0823 19:04:59.579334   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:04:59.579342   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:04:59.579397   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:04:59.598389   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:59.598416   46108 cri.go:89] found id: ""
	I0823 19:04:59.598426   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:04:59.598484   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:04:59.606603   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:04:59.606634   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:04:59.630620   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:04:59.630648   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:04:59.649254   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:04:59.649292   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:04:59.682830   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:04:59.682870   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:04:59.726266   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:04:59.726301   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:04:59.776539   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:04:59.776585   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:04:59.804619   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:04:59.804660   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:04:59.826112   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:04:59.826148   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:04:59.918310   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:04:59.918345   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:04:59.986811   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:04:59.986845   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:04:59.999558   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:04:59.999584   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:00.085399   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:00.085425   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:00.085440   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:00.105800   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:00.105833   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:02.636341   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:02.636901   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:02.636947   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:02.636988   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:02.653337   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:02.653363   46108 cri.go:89] found id: ""
	I0823 19:05:02.653372   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:02.653424   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.657063   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:02.657118   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:02.673116   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:02.673132   46108 cri.go:89] found id: ""
	I0823 19:05:02.673138   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:02.673187   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.676702   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:02.676757   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:02.693617   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:02.693632   46108 cri.go:89] found id: ""
	I0823 19:05:02.693639   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:02.693686   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.697578   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:02.697630   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:02.713130   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:02.713146   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:02.713150   46108 cri.go:89] found id: ""
	I0823 19:05:02.713158   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:02.713211   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.716808   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.720760   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:02.720830   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:02.738432   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:02.738457   46108 cri.go:89] found id: ""
	I0823 19:05:02.738467   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:02.738526   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.742127   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:02.742172   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:02.758113   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:02.758137   46108 cri.go:89] found id: ""
	I0823 19:05:02.758146   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:02.758192   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.762213   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:02.762266   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:02.781167   46108 cri.go:89] found id: ""
	I0823 19:05:02.781191   46108 logs.go:284] 0 containers: []
	W0823 19:05:02.781201   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:02.781209   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:02.781269   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:02.799120   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:02.799147   46108 cri.go:89] found id: ""
	I0823 19:05:02.799155   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:02.799204   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:02.803080   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:02.803097   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:02.819859   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:02.819882   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:02.880972   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:02.881002   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:02.902597   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:02.902625   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:02.923740   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:02.923775   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:02.962880   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:02.962912   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:02.988408   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:02.988436   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:03.026355   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:03.026388   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:03.058475   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:03.058507   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:03.079946   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:03.079975   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:03.153560   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:03.153611   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:03.165939   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:03.165971   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:03.244257   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:03.244285   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:03.244298   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:05.761659   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:05.762316   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:05.762370   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:05.762419   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:05.779566   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:05.779591   46108 cri.go:89] found id: ""
	I0823 19:05:05.779600   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:05.779656   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.784035   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:05.784095   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:05.800022   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:05.800051   46108 cri.go:89] found id: ""
	I0823 19:05:05.800060   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:05.800105   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.803608   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:05.803656   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:05.819262   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:05.819279   46108 cri.go:89] found id: ""
	I0823 19:05:05.819285   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:05.819329   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.823503   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:05.823567   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:05.841133   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:05.841149   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:05.841153   46108 cri.go:89] found id: ""
	I0823 19:05:05.841159   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:05.841209   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.845110   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.848624   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:05.848669   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:05.865134   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:05.865159   46108 cri.go:89] found id: ""
	I0823 19:05:05.865167   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:05.865209   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.869288   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:05.869355   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:05.885859   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:05.885892   46108 cri.go:89] found id: ""
	I0823 19:05:05.885901   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:05.885961   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.889755   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:05.889817   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:05.906717   46108 cri.go:89] found id: ""
	I0823 19:05:05.906757   46108 logs.go:284] 0 containers: []
	W0823 19:05:05.906768   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:05.906775   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:05.906832   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:05.921435   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:05.921460   46108 cri.go:89] found id: ""
	I0823 19:05:05.921468   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:05.921524   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:05.925488   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:05.925512   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:05.935886   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:05.935911   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:05.955300   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:05.955338   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:05.984584   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:05.984612   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:06.006430   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:06.006457   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:06.071360   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:06.071397   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:06.098227   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:06.098251   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:06.164341   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:06.164381   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:06.250145   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:06.250170   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:06.250185   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:06.268486   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:06.268517   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:06.308798   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:06.308831   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:06.338182   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:06.338213   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:06.368647   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:06.368679   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:08.885982   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:08.886569   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:08.886613   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:08.886657   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:08.903825   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:08.903851   46108 cri.go:89] found id: ""
	I0823 19:05:08.903861   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:08.903920   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:08.908376   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:08.908439   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:08.925898   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:08.925924   46108 cri.go:89] found id: ""
	I0823 19:05:08.925930   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:08.925988   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:08.930245   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:08.930315   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:08.947198   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:08.947223   46108 cri.go:89] found id: ""
	I0823 19:05:08.947231   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:08.947290   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:08.951593   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:08.951657   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:08.972355   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:08.972382   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:08.972389   46108 cri.go:89] found id: ""
	I0823 19:05:08.972398   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:08.972460   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:08.977006   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:08.981381   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:08.981450   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:08.997591   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:08.997617   46108 cri.go:89] found id: ""
	I0823 19:05:08.997626   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:08.997681   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:09.001971   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:09.002020   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:09.019841   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:09.019864   46108 cri.go:89] found id: ""
	I0823 19:05:09.019873   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:09.019931   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:09.024703   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:09.024770   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:09.041032   46108 cri.go:89] found id: ""
	I0823 19:05:09.041059   46108 logs.go:284] 0 containers: []
	W0823 19:05:09.041069   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:09.041077   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:09.041134   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:09.061258   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:09.061283   46108 cri.go:89] found id: ""
	I0823 19:05:09.061292   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:09.061347   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:09.065515   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:09.065556   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:09.132588   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:09.132632   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:09.143795   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:09.143825   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:09.227916   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:09.227941   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:09.227954   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:09.245188   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:09.245216   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:09.264861   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:09.264889   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:09.302495   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:09.302530   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:09.323552   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:09.323582   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:09.356325   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:09.356361   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:09.373837   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:09.373863   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:09.440687   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:09.440724   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:09.467916   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:09.467946   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:09.491139   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:09.491169   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:12.024943   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:12.025611   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:12.025661   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:12.025709   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:12.043465   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:12.043484   46108 cri.go:89] found id: ""
	I0823 19:05:12.043490   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:12.043530   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.047731   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:12.047801   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:12.063462   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:12.063485   46108 cri.go:89] found id: ""
	I0823 19:05:12.063493   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:12.063535   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.067085   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:12.067139   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:12.085253   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:12.085274   46108 cri.go:89] found id: ""
	I0823 19:05:12.085281   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:12.085333   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.089135   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:12.089194   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:12.105633   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:12.105653   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:12.105661   46108 cri.go:89] found id: ""
	I0823 19:05:12.105669   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:12.105739   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.109681   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.113420   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:12.113480   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:12.128387   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:12.128405   46108 cri.go:89] found id: ""
	I0823 19:05:12.128413   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:12.128469   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.132576   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:12.132637   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:12.150115   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:12.150134   46108 cri.go:89] found id: ""
	I0823 19:05:12.150141   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:12.150179   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.154174   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:12.154236   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:12.170640   46108 cri.go:89] found id: ""
	I0823 19:05:12.170660   46108 logs.go:284] 0 containers: []
	W0823 19:05:12.170666   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:12.170671   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:12.170725   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:12.188064   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:12.188086   46108 cri.go:89] found id: ""
	I0823 19:05:12.188098   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:12.188156   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:12.192292   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:12.192310   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:12.213933   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:12.213960   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:12.253648   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:12.253679   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:12.291294   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:12.291329   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:12.355231   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:12.355266   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:12.383271   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:12.383298   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:12.475229   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:12.475255   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:12.475269   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:12.487782   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:12.487816   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:12.507091   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:12.507131   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:12.527032   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:12.527058   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:12.552328   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:12.552373   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:12.587768   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:12.587798   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:12.606889   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:12.606922   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:15.182524   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:15.183169   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:15.183216   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:15.183261   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:15.207047   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:15.207068   46108 cri.go:89] found id: ""
	I0823 19:05:15.207077   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:15.207131   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.213209   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:15.213267   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:15.234240   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:15.234260   46108 cri.go:89] found id: ""
	I0823 19:05:15.234269   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:15.234318   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.242169   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:15.242220   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:15.271466   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:15.271487   46108 cri.go:89] found id: ""
	I0823 19:05:15.271493   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:15.271534   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.276970   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:15.277041   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:15.300819   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:15.300843   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:15.300849   46108 cri.go:89] found id: ""
	I0823 19:05:15.300857   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:15.300916   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.306646   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.311576   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:15.311645   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:15.331413   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:15.331440   46108 cri.go:89] found id: ""
	I0823 19:05:15.331450   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:15.331506   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.336009   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:15.336080   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:15.359492   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:15.359517   46108 cri.go:89] found id: ""
	I0823 19:05:15.359525   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:15.359582   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.363943   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:15.364004   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:15.382026   46108 cri.go:89] found id: ""
	I0823 19:05:15.382059   46108 logs.go:284] 0 containers: []
	W0823 19:05:15.382068   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:15.382076   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:15.382144   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:15.404262   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:15.404285   46108 cri.go:89] found id: ""
	I0823 19:05:15.404293   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:15.404355   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:15.408577   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:15.408605   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:15.439760   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:15.439793   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:15.459344   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:15.459375   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:15.495240   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:15.495277   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:15.521891   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:15.521931   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:15.564880   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:15.564920   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:15.617801   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:15.617849   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:15.639318   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:15.639352   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:15.681687   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:15.681718   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:15.726114   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:15.726160   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:15.806872   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:15.806907   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:15.882726   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:15.882761   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:15.982318   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:15.982344   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:15.982354   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:18.507719   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:18.508353   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:18.508410   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:18.508466   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:18.526666   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:18.526688   46108 cri.go:89] found id: ""
	I0823 19:05:18.526696   46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:18.526746   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.531373   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:18.531429   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:18.550481   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:18.550510   46108 cri.go:89] found id: ""
	I0823 19:05:18.550522   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:18.550575   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.556364   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:18.556426   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:18.575797   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:18.575815   46108 cri.go:89] found id: ""
	I0823 19:05:18.575822   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:18.575862   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.579786   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:18.579859   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:18.599732   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:18.599755   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:18.599759   46108 cri.go:89] found id: ""
	I0823 19:05:18.599765   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:18.599808   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.604070   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.608517   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:18.608591   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:18.631581   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:18.631616   46108 cri.go:89] found id: ""
	I0823 19:05:18.631624   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:18.631684   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.636076   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:18.636142   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:18.651058   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:18.651076   46108 cri.go:89] found id: ""
	I0823 19:05:18.651084   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:18.651138   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.654657   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:18.654705   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:18.671712   46108 cri.go:89] found id: ""
	I0823 19:05:18.671740   46108 logs.go:284] 0 containers: []
	W0823 19:05:18.671751   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:18.671759   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:18.671812   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:18.692729   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:18.692753   46108 cri.go:89] found id: ""
	I0823 19:05:18.692762   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:18.692811   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:18.697295   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:18.697314   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:18.719318   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:18.719346   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:18.739487   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:18.739514   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:18.761602   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:18.761635   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:18.798623   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:18.798654   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:18.870646   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:18.870689   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:18.895869   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:18.895902   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:18.976150   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:18.976189   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:18.987961   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:18.987989   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:19.081616   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:19.081644   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:19.081654   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:19.100113   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:19.100151   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:19.142367   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:19.142405   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:19.186469   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:19.186509   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:21.709446   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:26.710599   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:05:26.710673   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:26.710732   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:26.729493   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:26.729517   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:26.729523   46108 cri.go:89] found id: ""
	I0823 19:05:26.729531   46108 logs.go:284] 2 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:26.729593   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.734154   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.738569   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:26.738622   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:26.756621   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:26.756640   46108 cri.go:89] found id: ""
	I0823 19:05:26.756649   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:26.756704   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.761233   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:26.761289   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:26.781902   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:26.781929   46108 cri.go:89] found id: ""
	I0823 19:05:26.781939   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:26.781997   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.790699   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:26.790749   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:26.813784   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:26.813811   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:26.813818   46108 cri.go:89] found id: ""
	I0823 19:05:26.813827   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:26.813877   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.818490   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.823145   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:26.823202   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:26.845567   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:26.845592   46108 cri.go:89] found id: ""
	I0823 19:05:26.845601   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:26.845655   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.850360   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:26.850426   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:26.870395   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:26.870419   46108 cri.go:89] found id: ""
	I0823 19:05:26.870428   46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:26.870475   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.876101   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:26.876167   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:26.899479   46108 cri.go:89] found id: ""
	I0823 19:05:26.899504   46108 logs.go:284] 0 containers: []
	W0823 19:05:26.899515   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:26.899523   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:26.899589   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:26.927928   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:26.927952   46108 cri.go:89] found id: ""
	I0823 19:05:26.927970   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:26.928027   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:26.933943   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:26.933972   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:27.021878   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:27.021911   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0823 19:05:37.143963   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.122030093s)
	W0823 19:05:37.144006   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0823 19:05:37.144019   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:37.144031   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:37.169949   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:37.169989   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:37.206167   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:37.206202   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:37.225475   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:37.225503   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:37.239639   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:05:37.239673   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:37.261955   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:37.261991   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:37.278927   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:37.278955   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:37.309040   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:37.309068   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:37.334854   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:37.334892   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:37.362211   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:37.362245   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:37.395147   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:37.395178   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:37.461867   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:37.461902   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:40.005020   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:41.846446   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": read tcp 192.168.61.1:56544->192.168.61.47:8443: read: connection reset by peer
	I0823 19:05:41.846514   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:41.846577   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:41.866322   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:41.866353   46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:41.866360   46108 cri.go:89] found id: ""
	I0823 19:05:41.866369   46108 logs.go:284] 2 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
	I0823 19:05:41.866451   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:41.870940   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:41.875236   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:41.875303   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:41.894877   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:41.894903   46108 cri.go:89] found id: ""
	I0823 19:05:41.894911   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:41.894962   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:41.903269   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:41.903332   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:41.927076   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:41.927096   46108 cri.go:89] found id: ""
	I0823 19:05:41.927103   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:41.927146   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:41.933333   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:41.933406   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:41.951576   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:41.951601   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:41.951607   46108 cri.go:89] found id: ""
	I0823 19:05:41.951615   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:41.951674   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:41.958235   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:41.963263   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:41.963326   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:41.981994   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:41.982019   46108 cri.go:89] found id: ""
	I0823 19:05:41.982026   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:41.982081   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:41.986871   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:41.986931   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:42.004018   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:42.004036   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:42.004040   46108 cri.go:89] found id: ""
	I0823 19:05:42.004045   46108 logs.go:284] 2 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:42.004110   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:42.008132   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:42.011951   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:42.011996   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:42.031705   46108 cri.go:89] found id: ""
	I0823 19:05:42.031725   46108 logs.go:284] 0 containers: []
	W0823 19:05:42.031735   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:42.031743   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:42.031805   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:42.050488   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:42.050510   46108 cri.go:89] found id: ""
	I0823 19:05:42.050519   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:42.050573   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:42.054572   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:42.054592   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:42.065667   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:42.065697   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:42.145190   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:42.145220   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:05:42.145234   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:42.165642   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:42.165670   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:42.189613   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:42.189645   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:42.211684   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:42.211711   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:42.271145   46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
	I0823 19:05:42.271182   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
	I0823 19:05:42.290971   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:05:42.290999   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:42.310571   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:42.310597   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:42.328444   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:42.328475   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:42.362660   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:42.362692   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:42.392622   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:42.392649   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:42.455590   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:42.455621   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:42.481796   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:42.481826   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:42.498843   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:42.498871   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:45.032605   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:45.033208   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:45.033258   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:45.033302   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:45.050497   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:45.050522   46108 cri.go:89] found id: ""
	I0823 19:05:45.050531   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:05:45.050593   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.055383   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:45.055444   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:45.077342   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:45.077366   46108 cri.go:89] found id: ""
	I0823 19:05:45.077373   46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:45.077426   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.082934   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:45.083006   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:45.102586   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:45.102610   46108 cri.go:89] found id: ""
	I0823 19:05:45.102619   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:45.102677   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.106802   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:45.106882   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:45.125732   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:45.125759   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:45.125765   46108 cri.go:89] found id: ""
	I0823 19:05:45.125774   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:45.125831   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.130533   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.136227   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:45.136289   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:45.155736   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:45.155762   46108 cri.go:89] found id: ""
	I0823 19:05:45.155769   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:45.155822   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.160563   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:45.160635   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:45.177406   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:45.177433   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:45.177440   46108 cri.go:89] found id: ""
	I0823 19:05:45.177448   46108 logs.go:284] 2 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:45.177506   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.182054   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.186013   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:45.186084   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:45.202267   46108 cri.go:89] found id: ""
	I0823 19:05:45.202294   46108 logs.go:284] 0 containers: []
	W0823 19:05:45.202308   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:45.202316   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:45.202378   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:45.223935   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:45.223954   46108 cri.go:89] found id: ""
	I0823 19:05:45.223960   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:45.224013   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:45.232380   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:45.232413   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:45.284220   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:45.284256   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:45.298376   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:45.298404   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:45.328296   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:45.328344   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:45.354436   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:45.354471   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:45.388543   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:45.388578   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:45.407329   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:45.407364   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:45.504343   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:05:45.504374   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:45.547849   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:05:45.547884   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:45.570491   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:45.570519   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:45.605456   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:45.605487   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:45.633417   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:45.633445   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:45.706675   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:45.706713   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:45.797573   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:45.797598   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:45.797609   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:48.321562   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:48.322150   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:48.322203   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:48.322261   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:48.339493   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:48.339517   46108 cri.go:89] found id: ""
	I0823 19:05:48.339527   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:05:48.339585   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.343895   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:48.343962   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:48.373419   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:48.373445   46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	I0823 19:05:48.373452   46108 cri.go:89] found id: ""
	I0823 19:05:48.373462   46108 logs.go:284] 2 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
	I0823 19:05:48.373521   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.377952   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.383096   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:48.383167   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:48.398715   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:48.398736   46108 cri.go:89] found id: ""
	I0823 19:05:48.398744   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:48.398813   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.402949   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:48.403013   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:48.426893   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:48.426917   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:48.426923   46108 cri.go:89] found id: ""
	I0823 19:05:48.426932   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:48.426991   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.431665   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.435748   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:48.435810   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:48.452955   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:48.452974   46108 cri.go:89] found id: ""
	I0823 19:05:48.452981   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:48.453020   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.457345   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:48.457412   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:48.477455   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:48.477476   46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:48.477482   46108 cri.go:89] found id: ""
	I0823 19:05:48.477491   46108 logs.go:284] 2 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
	I0823 19:05:48.477559   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.482041   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.486974   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:48.487028   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:48.507380   46108 cri.go:89] found id: ""
	I0823 19:05:48.507406   46108 logs.go:284] 0 containers: []
	W0823 19:05:48.507417   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:48.507425   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:48.507496   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:48.525464   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:48.525490   46108 cri.go:89] found id: ""
	I0823 19:05:48.525500   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:48.525577   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:48.529762   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:48.529790   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:48.621352   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:48.621384   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:05:48.621399   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:48.656553   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:05:48.656584   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:48.674634   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:05:48.674665   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:48.691778   46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
	I0823 19:05:48.691812   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
	I0823 19:05:48.728246   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:48.728279   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:48.752383   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:48.752413   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:48.775863   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:48.775896   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:48.807936   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:48.807976   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:48.869466   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:48.869500   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:48.890400   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:48.890430   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:48.952391   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:48.952428   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:48.963271   46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
	I0823 19:05:48.963290   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
	W0823 19:05:48.980707   46108 logs.go:130] failed etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d" /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d": Process exited with status 1
	stdout:
	
	stderr:
	E0823 19:05:48.976612    8890 remote_runtime.go:329] ContainerStatus "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d": not found
	time="2023-08-23T19:05:48Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d\": not found"
	 output: 
	** stderr ** 
	E0823 19:05:48.976612    8890 remote_runtime.go:329] ContainerStatus "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d": not found
	time="2023-08-23T19:05:48Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d\": not found"
	
	** /stderr **
	I0823 19:05:48.980741   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:48.980754   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:49.017331   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:49.017367   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:51.536443   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:51.537122   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:51.537181   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:51.537238   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:51.555402   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:51.555434   46108 cri.go:89] found id: ""
	I0823 19:05:51.555441   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:05:51.555494   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.559708   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:51.559780   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:51.582970   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:51.582994   46108 cri.go:89] found id: ""
	I0823 19:05:51.583002   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:05:51.583060   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.587385   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:51.587451   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:51.606721   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:51.606749   46108 cri.go:89] found id: ""
	I0823 19:05:51.606758   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:51.606817   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.611199   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:51.611279   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:51.629690   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:51.629711   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:51.629715   46108 cri.go:89] found id: ""
	I0823 19:05:51.629721   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:51.629781   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.635016   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.639061   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:51.639127   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:51.656536   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:51.656561   46108 cri.go:89] found id: ""
	I0823 19:05:51.656569   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:51.656622   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.660991   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:51.661060   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:51.677675   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:51.677699   46108 cri.go:89] found id: ""
	I0823 19:05:51.677707   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:05:51.677763   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.682316   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:51.682381   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:51.698101   46108 cri.go:89] found id: ""
	I0823 19:05:51.698128   46108 logs.go:284] 0 containers: []
	W0823 19:05:51.698138   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:51.698145   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:51.698198   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:51.717967   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:51.717992   46108 cri.go:89] found id: ""
	I0823 19:05:51.718000   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:51.718059   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:51.724446   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:51.724469   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:51.736002   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:51.736028   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:51.822206   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:51.822233   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:51.822252   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:51.851889   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:51.851921   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:51.870203   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:51.870227   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:51.937335   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:05:51.937365   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:51.975724   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:51.975760   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:52.003943   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:52.003970   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:52.062343   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:05:52.062376   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:52.085086   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:05:52.085115   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:52.101871   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:52.101898   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:52.142589   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:52.142615   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:52.167852   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:52.167888   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
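	The diagnostic cycle above can be replayed by hand inside the guest VM. A minimal sketch, assuming SSH access to the running-upgrade machine and composing the same crictl/journalctl commands the harness shows (the head -n1 pipe and the variable are additions for illustration, not part of the harness):
	
	    # enumerate the kube-apiserver container and dump its recent logs
	    APISERVER_ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	    sudo /bin/crictl logs --tail 400 "$APISERVER_ID"
	    # unit logs and kernel warnings, exactly as gathered above
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400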
	I0823 19:05:54.703424   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:54.704054   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:54.704118   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:54.704180   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:54.728659   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:54.728684   46108 cri.go:89] found id: ""
	I0823 19:05:54.728693   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:05:54.728797   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.735292   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:54.735361   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:54.754779   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:54.754806   46108 cri.go:89] found id: ""
	I0823 19:05:54.754816   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:05:54.754878   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.759465   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:54.759520   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:54.788532   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:54.788556   46108 cri.go:89] found id: ""
	I0823 19:05:54.788566   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:54.788621   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.794260   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:54.794329   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:54.820790   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:54.820819   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:54.820831   46108 cri.go:89] found id: ""
	I0823 19:05:54.820840   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:54.820895   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.827024   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.833001   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:54.833093   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:54.856210   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:54.856234   46108 cri.go:89] found id: ""
	I0823 19:05:54.856243   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:54.856298   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.861399   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:54.861456   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:54.883432   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:54.883459   46108 cri.go:89] found id: ""
	I0823 19:05:54.883468   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:05:54.883529   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.889339   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:54.889425   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:54.909342   46108 cri.go:89] found id: ""
	I0823 19:05:54.909374   46108 logs.go:284] 0 containers: []
	W0823 19:05:54.909385   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:54.909392   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:54.909454   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:54.934585   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:54.934608   46108 cri.go:89] found id: ""
	I0823 19:05:54.934616   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:54.934686   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:54.939394   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:54.939421   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:55.010420   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:55.010452   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:55.024763   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:55.024800   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:55.051548   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:55.051577   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:55.089386   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:55.089425   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:55.182639   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:55.182676   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:55.215920   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:55.215970   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:55.243850   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:55.243890   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:55.365367   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:55.365394   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:05:55.365409   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:55.391596   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:05:55.391634   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:55.413706   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:55.413737   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:55.461567   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:55.461599   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:55.505222   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:05:55.505254   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:58.049534   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:05:58.050212   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:05:58.050263   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:05:58.050318   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:05:58.069075   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:58.069103   46108 cri.go:89] found id: ""
	I0823 19:05:58.069112   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:05:58.069172   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.073772   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:05:58.073840   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:05:58.090025   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:58.090050   46108 cri.go:89] found id: ""
	I0823 19:05:58.090058   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:05:58.090113   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.094442   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:05:58.094511   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:05:58.119228   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:58.119250   46108 cri.go:89] found id: ""
	I0823 19:05:58.119258   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:05:58.119310   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.125716   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:05:58.125789   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:05:58.146238   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:58.146276   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:58.146289   46108 cri.go:89] found id: ""
	I0823 19:05:58.146297   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:05:58.146353   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.152091   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.157411   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:05:58.157483   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:05:58.174659   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:05:58.174692   46108 cri.go:89] found id: ""
	I0823 19:05:58.174702   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:05:58.174760   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.179755   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:05:58.179830   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:05:58.205206   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:58.205227   46108 cri.go:89] found id: ""
	I0823 19:05:58.205234   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:05:58.205285   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.211124   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:05:58.211201   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:05:58.230651   46108 cri.go:89] found id: ""
	I0823 19:05:58.230692   46108 logs.go:284] 0 containers: []
	W0823 19:05:58.230703   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:05:58.230720   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:05:58.230786   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:05:58.255736   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:58.255764   46108 cri.go:89] found id: ""
	I0823 19:05:58.255773   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:05:58.255835   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:05:58.260228   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:05:58.260256   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:05:58.290658   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:05:58.290703   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:05:58.313231   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:05:58.313266   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:05:58.405561   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:05:58.405600   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:05:58.494309   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:05:58.494337   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:05:58.494350   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:05:58.525794   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:05:58.525832   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:05:58.552023   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:05:58.552057   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:05:58.599056   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:05:58.599101   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:05:58.640112   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:05:58.640147   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:05:58.673647   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:05:58.673675   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:05:58.745546   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:05:58.745581   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:05:58.758054   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:05:58.758092   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:05:58.781280   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:05:58.781316   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:01.330698   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:01.331395   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:01.331452   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:01.331512   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:01.352439   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:01.352456   46108 cri.go:89] found id: ""
	I0823 19:06:01.352464   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:01.352505   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.356431   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:01.356489   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:01.372362   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:01.372381   46108 cri.go:89] found id: ""
	I0823 19:06:01.372390   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:01.372449   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.376304   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:01.376377   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:01.393905   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:01.393933   46108 cri.go:89] found id: ""
	I0823 19:06:01.393942   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:01.394001   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.398219   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:01.398306   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:01.417133   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:01.417154   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:01.417158   46108 cri.go:89] found id: ""
	I0823 19:06:01.417165   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:01.417218   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.422147   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.426098   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:01.426165   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:01.443501   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:01.443526   46108 cri.go:89] found id: ""
	I0823 19:06:01.443536   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:01.443600   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.447775   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:01.447845   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:01.464437   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:01.464465   46108 cri.go:89] found id: ""
	I0823 19:06:01.464474   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:01.464531   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.468649   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:01.468732   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:01.485157   46108 cri.go:89] found id: ""
	I0823 19:06:01.485183   46108 logs.go:284] 0 containers: []
	W0823 19:06:01.485194   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:01.485202   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:01.485263   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:01.502362   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:01.502389   46108 cri.go:89] found id: ""
	I0823 19:06:01.502411   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:01.502468   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:01.507271   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:01.507353   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:01.535669   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:01.535698   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:01.558708   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:01.558740   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:01.591352   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:01.591377   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:01.666519   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:01.666556   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:01.692114   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:01.692147   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:01.717823   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:01.717853   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:01.763825   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:01.763858   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:01.796442   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:01.796489   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:01.830332   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:01.830363   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:01.897377   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:01.897412   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:01.909533   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:01.909570   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:01.991558   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:01.991587   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:01.991606   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:04.509848   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:04.510506   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:04.510558   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:04.510621   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:04.529340   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:04.529366   46108 cri.go:89] found id: ""
	I0823 19:06:04.529375   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:04.529427   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.535732   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:04.535803   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:04.553995   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:04.554022   46108 cri.go:89] found id: ""
	I0823 19:06:04.554029   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:04.554076   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.557737   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:04.557817   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:04.573913   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:04.573938   46108 cri.go:89] found id: ""
	I0823 19:06:04.573946   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:04.573998   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.577667   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:04.577724   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:04.596844   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:04.596866   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:04.596871   46108 cri.go:89] found id: ""
	I0823 19:06:04.596880   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:04.596926   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.600759   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.605475   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:04.605551   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:04.625011   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:04.625035   46108 cri.go:89] found id: ""
	I0823 19:06:04.625041   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:04.625083   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.633869   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:04.633934   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:04.654593   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:04.654616   46108 cri.go:89] found id: ""
	I0823 19:06:04.654624   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:04.654682   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.658856   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:04.658924   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:04.678983   46108 cri.go:89] found id: ""
	I0823 19:06:04.679004   46108 logs.go:284] 0 containers: []
	W0823 19:06:04.679011   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:04.679017   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:04.679066   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:04.696276   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:04.696297   46108 cri.go:89] found id: ""
	I0823 19:06:04.696306   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:04.696361   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:04.700244   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:04.700270   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:04.719858   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:04.719890   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:04.788211   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:04.788247   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:04.800580   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:04.800611   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:04.885821   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:04.885850   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:04.885863   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:04.908380   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:04.908407   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:04.962523   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:04.962565   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:04.998637   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:04.998684   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:05.020839   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:05.020883   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:05.094884   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:05.094918   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:05.111421   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:05.111452   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:05.131504   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:05.131542   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:05.153371   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:05.153402   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:07.692356   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:07.693035   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:07.693094   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:07.693149   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:07.711744   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:07.711761   46108 cri.go:89] found id: ""
	I0823 19:06:07.711768   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:07.711818   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.716262   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:07.716321   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:07.741478   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:07.741503   46108 cri.go:89] found id: ""
	I0823 19:06:07.741512   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:07.741575   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.748187   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:07.748259   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:07.769321   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:07.769348   46108 cri.go:89] found id: ""
	I0823 19:06:07.769357   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:07.769402   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.774609   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:07.774680   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:07.795679   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:07.795706   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:07.795713   46108 cri.go:89] found id: ""
	I0823 19:06:07.795721   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:07.795777   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.800827   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.805586   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:07.805649   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:07.825329   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:07.825357   46108 cri.go:89] found id: ""
	I0823 19:06:07.825366   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:07.825422   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.829581   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:07.829642   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:07.846776   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:07.846801   46108 cri.go:89] found id: ""
	I0823 19:06:07.846810   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:07.846868   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.851255   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:07.851315   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:07.867535   46108 cri.go:89] found id: ""
	I0823 19:06:07.867560   46108 logs.go:284] 0 containers: []
	W0823 19:06:07.867574   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:07.867582   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:07.867640   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:07.884538   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:07.884563   46108 cri.go:89] found id: ""
	I0823 19:06:07.884573   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:07.884635   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:07.889386   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:07.889415   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:07.919315   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:07.919343   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:08.009636   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:08.009670   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:08.099086   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:08.099113   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:08.099131   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:08.117038   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:08.117071   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:08.164356   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:08.164394   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:08.208857   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:08.208902   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:08.243681   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:08.243712   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:08.262818   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:08.262852   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:08.328349   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:08.328391   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:08.340217   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:08.340243   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:08.362864   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:08.362896   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:08.386884   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:08.386910   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:10.909996   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:10.910647   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:10.910700   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:10.910764   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:10.932264   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:10.932291   46108 cri.go:89] found id: ""
	I0823 19:06:10.932299   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:10.932357   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:10.937135   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:10.937207   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:10.966219   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:10.966253   46108 cri.go:89] found id: ""
	I0823 19:06:10.966263   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:10.966318   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:10.971184   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:10.971266   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:10.994135   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:10.994160   46108 cri.go:89] found id: ""
	I0823 19:06:10.994168   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:10.994228   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:10.999215   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:10.999284   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:11.017716   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:11.017739   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:11.017744   46108 cri.go:89] found id: ""
	I0823 19:06:11.017752   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:11.017815   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:11.022288   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:11.026307   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:11.026379   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:11.047035   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:11.047061   46108 cri.go:89] found id: ""
	I0823 19:06:11.047068   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:11.047120   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:11.051341   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:11.051421   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:11.071688   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:11.071715   46108 cri.go:89] found id: ""
	I0823 19:06:11.071724   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:11.071782   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:11.075930   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:11.076007   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:11.092653   46108 cri.go:89] found id: ""
	I0823 19:06:11.092679   46108 logs.go:284] 0 containers: []
	W0823 19:06:11.092689   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:11.092697   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:11.092764   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:11.112201   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:11.112230   46108 cri.go:89] found id: ""
	I0823 19:06:11.112240   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:11.112307   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:11.116802   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:11.116831   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:11.136593   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:11.136618   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:11.211132   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:11.211166   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:11.222746   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:11.222775   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:11.303168   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:11.303188   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:11.303199   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:11.319114   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:11.319141   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:11.345675   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:11.345702   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:11.371184   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:11.371212   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:11.406205   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:11.406240   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:11.475694   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:11.475735   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:11.503636   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:11.503666   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:11.523150   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:11.523180   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:11.562051   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:11.562090   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:14.099505   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:14.100215   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:14.100261   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:14.100309   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:14.125582   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:14.125611   46108 cri.go:89] found id: ""
	I0823 19:06:14.125621   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:14.125678   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.131327   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:14.131408   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:14.154601   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:14.154628   46108 cri.go:89] found id: ""
	I0823 19:06:14.154635   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:14.154701   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.159514   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:14.159603   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:14.178540   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:14.178565   46108 cri.go:89] found id: ""
	I0823 19:06:14.178573   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:14.178630   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.182950   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:14.183018   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:14.199646   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:14.199673   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:14.199677   46108 cri.go:89] found id: ""
	I0823 19:06:14.199684   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:14.199735   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.204477   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.208343   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:14.208397   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:14.228214   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:14.228243   46108 cri.go:89] found id: ""
	I0823 19:06:14.228251   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:14.228305   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.233399   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:14.233471   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:14.250578   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:14.250607   46108 cri.go:89] found id: ""
	I0823 19:06:14.250616   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:14.250675   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.254830   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:14.254904   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:14.282730   46108 cri.go:89] found id: ""
	I0823 19:06:14.282757   46108 logs.go:284] 0 containers: []
	W0823 19:06:14.282774   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:14.282780   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:14.282838   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:14.300293   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:14.300321   46108 cri.go:89] found id: ""
	I0823 19:06:14.300329   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:14.300386   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:14.304543   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:14.304571   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:14.350718   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:14.350752   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:14.367639   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:14.367673   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:14.435343   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:14.435382   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:14.447785   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:14.447815   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:14.532345   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:14.532378   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:14.532393   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:14.551690   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:14.551722   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:14.591905   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:14.591933   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:14.664723   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:14.664759   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:14.689075   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:14.689103   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:14.713101   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:14.713143   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:14.745713   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:14.745750   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:14.773981   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:14.774022   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:17.312146   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:17.312806   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:17.312880   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:17.312938   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:17.330864   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:17.330944   46108 cri.go:89] found id: ""
	I0823 19:06:17.330969   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:17.331059   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.335582   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:17.335638   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:17.353605   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:17.353624   46108 cri.go:89] found id: ""
	I0823 19:06:17.353631   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:17.353675   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.357497   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:17.357577   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:17.377588   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:17.377626   46108 cri.go:89] found id: ""
	I0823 19:06:17.377636   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:17.377696   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.382099   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:17.382161   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:17.401289   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:17.401312   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:17.401319   46108 cri.go:89] found id: ""
	I0823 19:06:17.401327   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:17.401383   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.405299   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.409182   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:17.409248   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:17.427439   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:17.427459   46108 cri.go:89] found id: ""
	I0823 19:06:17.427469   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:17.427519   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.431764   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:17.431821   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:17.448373   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:17.448399   46108 cri.go:89] found id: ""
	I0823 19:06:17.448416   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:17.448476   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.452429   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:17.452481   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:17.469717   46108 cri.go:89] found id: ""
	I0823 19:06:17.469740   46108 logs.go:284] 0 containers: []
	W0823 19:06:17.469747   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:17.469753   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:17.469805   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:17.486112   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:17.486137   46108 cri.go:89] found id: ""
	I0823 19:06:17.486145   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:17.486204   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:17.489962   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:17.489991   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:17.563739   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:17.563776   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:17.574269   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:17.574299   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:17.596535   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:17.596564   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:17.617233   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:17.617267   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:17.647718   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:17.647750   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:17.687097   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:17.687135   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:17.722196   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:17.722230   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:17.742181   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:17.742207   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:17.808340   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:17.808379   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:17.894472   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:17.894494   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:17.894507   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:17.916051   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:17.916080   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:17.953461   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:17.953502   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:20.478759   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:20.479429   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:20.479472   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:20.479517   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:20.500015   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:20.500052   46108 cri.go:89] found id: ""
	I0823 19:06:20.500061   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:20.500115   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.504199   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:20.504272   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:20.521497   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:20.521523   46108 cri.go:89] found id: ""
	I0823 19:06:20.521531   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:20.521602   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.526129   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:20.526194   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:20.554028   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:20.554055   46108 cri.go:89] found id: ""
	I0823 19:06:20.554064   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:20.554125   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.558290   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:20.558366   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:20.576745   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:20.576771   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:20.576775   46108 cri.go:89] found id: ""
	I0823 19:06:20.576781   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:20.576835   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.581785   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.585852   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:20.585923   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:20.603803   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:20.603827   46108 cri.go:89] found id: ""
	I0823 19:06:20.603834   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:20.603895   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.607978   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:20.608048   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:20.627666   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:20.627686   46108 cri.go:89] found id: ""
	I0823 19:06:20.627694   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:20.627737   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.632181   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:20.632238   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:20.650205   46108 cri.go:89] found id: ""
	I0823 19:06:20.650230   46108 logs.go:284] 0 containers: []
	W0823 19:06:20.650240   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:20.650251   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:20.650308   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:20.668478   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:20.668500   46108 cri.go:89] found id: ""
	I0823 19:06:20.668509   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:20.668562   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:20.673326   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:20.673354   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:20.714754   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:20.714789   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:20.748997   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:20.749028   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:20.766798   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:20.766822   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:20.837409   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:20.837447   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:20.866229   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:20.866255   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:20.935944   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:20.935992   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:21.025154   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:21.025185   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:21.025200   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:21.058400   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:21.058433   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:21.084037   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:21.084070   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:21.122780   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:21.122812   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:21.134005   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:21.134036   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:21.153320   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:21.153349   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:23.670983   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:23.671729   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:23.671787   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:23.671839   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:23.690300   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:23.690324   46108 cri.go:89] found id: ""
	I0823 19:06:23.690333   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:23.690391   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.695769   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:23.695840   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:23.713653   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:23.713679   46108 cri.go:89] found id: ""
	I0823 19:06:23.713687   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:23.713739   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.717980   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:23.718047   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:23.742293   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:23.742318   46108 cri.go:89] found id: ""
	I0823 19:06:23.742327   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:23.742382   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.746637   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:23.746688   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:23.764545   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:23.764564   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:23.764570   46108 cri.go:89] found id: ""
	I0823 19:06:23.764578   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:23.764635   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.769385   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.773582   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:23.773644   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:23.789972   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:23.789991   46108 cri.go:89] found id: ""
	I0823 19:06:23.789997   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:23.790041   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.794732   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:23.794841   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:23.813335   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:23.813358   46108 cri.go:89] found id: ""
	I0823 19:06:23.813367   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:23.813424   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.817918   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:23.817992   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:23.836337   46108 cri.go:89] found id: ""
	I0823 19:06:23.836365   46108 logs.go:284] 0 containers: []
	W0823 19:06:23.836375   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:23.836383   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:23.836452   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:23.854760   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:23.854782   46108 cri.go:89] found id: ""
	I0823 19:06:23.854791   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:23.854849   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:23.859227   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:23.859249   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:23.893656   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:23.893688   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:23.909502   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:23.909536   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:23.978095   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:23.978132   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:24.048140   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:24.048178   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:24.061139   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:24.061169   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:24.121262   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:24.121309   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:24.147113   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:24.147144   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:24.170655   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:24.170688   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:24.203655   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:24.203687   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:24.232782   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:24.232815   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:24.328560   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:24.328591   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:24.328606   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:24.349916   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:24.349939   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:26.868754   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:26.869392   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:26.869451   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:26.869512   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:26.889223   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:26.889246   46108 cri.go:89] found id: ""
	I0823 19:06:26.889256   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:26.889305   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:26.893591   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:26.893668   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:26.913170   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:26.913197   46108 cri.go:89] found id: ""
	I0823 19:06:26.913205   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:26.913275   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:26.917480   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:26.917556   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:26.936063   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:26.936087   46108 cri.go:89] found id: ""
	I0823 19:06:26.936093   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:26.936143   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:26.940882   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:26.940958   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:26.958927   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:26.958950   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:26.958956   46108 cri.go:89] found id: ""
	I0823 19:06:26.958964   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:26.959019   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:26.963573   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:26.967483   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:26.967540   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:26.984382   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:26.984402   46108 cri.go:89] found id: ""
	I0823 19:06:26.984410   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:26.984465   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:26.989408   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:26.989474   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:27.006689   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:27.006707   46108 cri.go:89] found id: ""
	I0823 19:06:27.006715   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:27.006767   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:27.011886   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:27.011947   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:27.036215   46108 cri.go:89] found id: ""
	I0823 19:06:27.036249   46108 logs.go:284] 0 containers: []
	W0823 19:06:27.036263   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:27.036272   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:27.036337   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:27.064621   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:27.064644   46108 cri.go:89] found id: ""
	I0823 19:06:27.064653   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:27.064708   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:27.070401   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:27.070427   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:27.134688   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:27.134723   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:27.147350   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:27.147375   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:27.195360   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:27.195395   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:27.277900   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:27.277940   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:27.315975   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:27.316010   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:27.338544   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:27.338593   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:27.432654   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:27.432685   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:27.432700   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:27.460779   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:27.460815   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:27.488452   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:27.488490   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:27.517308   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:27.517346   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:27.578386   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:27.578438   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:27.609893   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:27.609932   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:30.155181   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:30.155920   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:30.155967   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:30.156024   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:30.180694   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:30.180718   46108 cri.go:89] found id: ""
	I0823 19:06:30.180724   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:30.180783   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.186267   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:30.186347   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:30.217747   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:30.217779   46108 cri.go:89] found id: ""
	I0823 19:06:30.217788   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:30.217848   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.223522   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:30.223599   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:30.246882   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:30.246908   46108 cri.go:89] found id: ""
	I0823 19:06:30.246917   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:30.246974   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.251123   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:30.251187   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:30.269111   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:30.269137   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:30.269143   46108 cri.go:89] found id: ""
	I0823 19:06:30.269151   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:30.269211   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.273823   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.278377   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:30.278432   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:30.297232   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:30.297254   46108 cri.go:89] found id: ""
	I0823 19:06:30.297262   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:30.297314   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.301894   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:30.301969   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:30.320093   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:30.320116   46108 cri.go:89] found id: ""
	I0823 19:06:30.320124   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:30.320185   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.324639   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:30.324705   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:30.343752   46108 cri.go:89] found id: ""
	I0823 19:06:30.343779   46108 logs.go:284] 0 containers: []
	W0823 19:06:30.343789   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:30.343796   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:30.343859   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:30.364451   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:30.364475   46108 cri.go:89] found id: ""
	I0823 19:06:30.364484   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:30.364544   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:30.369280   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:30.369304   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:30.430949   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:30.430984   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:30.441745   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:30.441783   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:30.537527   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:30.537569   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:30.537588   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:30.562492   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:30.562522   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:30.596878   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:30.596912   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:30.662071   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:30.662106   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:30.691365   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:30.691405   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:30.720807   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:30.720842   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:30.744868   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:30.744895   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:30.790120   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:30.790157   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:30.829824   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:30.829859   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:30.861421   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:30.861453   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:33.380925   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:33.381642   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:33.381692   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:33.381751   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:33.400140   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:33.400159   46108 cri.go:89] found id: ""
	I0823 19:06:33.400165   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:33.400209   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.403915   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:33.403980   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:33.420690   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:33.420715   46108 cri.go:89] found id: ""
	I0823 19:06:33.420723   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:33.420777   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.425119   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:33.425166   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:33.442477   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:33.442500   46108 cri.go:89] found id: ""
	I0823 19:06:33.442507   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:33.442549   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.446734   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:33.446794   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:33.462854   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:33.462876   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:33.462883   46108 cri.go:89] found id: ""
	I0823 19:06:33.462891   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:33.462941   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.466806   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.471050   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:33.471112   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:33.486208   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:33.486232   46108 cri.go:89] found id: ""
	I0823 19:06:33.486240   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:33.486299   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.490066   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:33.490120   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:33.507910   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:33.507930   46108 cri.go:89] found id: ""
	I0823 19:06:33.507939   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:33.508000   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.512488   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:33.512548   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:33.528403   46108 cri.go:89] found id: ""
	I0823 19:06:33.528422   46108 logs.go:284] 0 containers: []
	W0823 19:06:33.528429   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:33.528435   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:33.528489   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:33.548477   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:33.548496   46108 cri.go:89] found id: ""
	I0823 19:06:33.548503   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:33.548563   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:33.552606   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:33.552630   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:33.571960   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:33.571992   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:33.597784   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:33.597809   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:33.658944   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:33.658980   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:33.737079   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:33.737109   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:33.737124   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:33.758694   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:33.758719   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:33.805837   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:33.805887   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:33.833491   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:33.833522   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:33.868896   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:33.868933   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:33.905173   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:33.905205   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:33.924286   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:33.924315   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:33.935275   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:33.935301   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:33.961618   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:33.961646   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:36.531780   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:36.532804   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:36.532862   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:36.532920   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:36.556343   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:36.556364   46108 cri.go:89] found id: ""
	I0823 19:06:36.556370   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:36.556418   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.560684   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:36.560749   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:36.581614   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:36.581636   46108 cri.go:89] found id: ""
	I0823 19:06:36.581644   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:36.581693   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.586179   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:36.586264   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:36.602636   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:36.602668   46108 cri.go:89] found id: ""
	I0823 19:06:36.602675   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:36.602736   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.607744   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:36.607810   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:36.623919   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:36.623942   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:36.623946   46108 cri.go:89] found id: ""
	I0823 19:06:36.623952   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:36.624009   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.628395   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.633650   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:36.633709   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:36.654155   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:36.654181   46108 cri.go:89] found id: ""
	I0823 19:06:36.654190   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:36.654240   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.658880   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:36.658946   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:36.678034   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:36.678061   46108 cri.go:89] found id: ""
	I0823 19:06:36.678067   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:36.678126   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.683815   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:36.683902   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:36.702404   46108 cri.go:89] found id: ""
	I0823 19:06:36.702425   46108 logs.go:284] 0 containers: []
	W0823 19:06:36.702432   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:36.702438   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:36.702485   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:36.723009   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:36.723034   46108 cri.go:89] found id: ""
	I0823 19:06:36.723043   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:36.723096   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:36.727531   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:36.727555   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:36.752161   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:36.752197   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:36.777024   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:36.777052   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:36.823091   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:36.823122   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:36.847267   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:36.847294   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:36.878818   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:36.878854   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:36.897474   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:36.897507   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:36.911710   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:36.911741   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:37.000069   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:37.000098   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:37.000117   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:37.020933   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:37.020959   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:37.074200   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:37.074234   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:37.108333   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:37.108368   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:37.175592   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:37.175637   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:39.742320   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:39.742861   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:39.742909   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:39.742961   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:39.760356   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:39.760376   46108 cri.go:89] found id: ""
	I0823 19:06:39.760386   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:39.760436   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.766261   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:39.766340   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:39.783568   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:39.783590   46108 cri.go:89] found id: ""
	I0823 19:06:39.783597   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:39.783639   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.788058   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:39.788133   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:39.805009   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:39.805034   46108 cri.go:89] found id: ""
	I0823 19:06:39.805043   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:39.805100   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.808986   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:39.809050   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:39.825844   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:39.825862   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:39.825866   46108 cri.go:89] found id: ""
	I0823 19:06:39.825874   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:39.825928   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.830522   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.834781   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:39.834844   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:39.850941   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:39.850968   46108 cri.go:89] found id: ""
	I0823 19:06:39.850976   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:39.851034   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.855218   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:39.855296   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:39.871059   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:39.871079   46108 cri.go:89] found id: ""
	I0823 19:06:39.871085   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:39.871134   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.875001   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:39.875072   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:39.890351   46108 cri.go:89] found id: ""
	I0823 19:06:39.890376   46108 logs.go:284] 0 containers: []
	W0823 19:06:39.890383   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:39.890388   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:39.890444   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:39.906428   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:39.906449   46108 cri.go:89] found id: ""
	I0823 19:06:39.906456   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:39.906497   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:39.910526   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:39.910551   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:39.998329   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:39.998355   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:39.998376   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:40.024566   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:40.024594   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:40.051364   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:40.051397   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:40.068764   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:40.068788   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:40.108132   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:40.108167   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:40.142888   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:40.142920   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:40.171984   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:40.172015   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:40.239620   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:40.239659   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:40.301043   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:40.301076   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:40.311860   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:40.311885   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:40.327757   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:40.327786   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:40.353339   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:40.353370   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:42.876759   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:42.877471   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:42.877530   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:42.877607   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:42.894908   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:42.894929   46108 cri.go:89] found id: ""
	I0823 19:06:42.894936   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:42.894981   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:42.898972   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:42.899033   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:42.915001   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:42.915022   46108 cri.go:89] found id: ""
	I0823 19:06:42.915031   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:42.915101   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:42.919198   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:42.919256   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:42.935338   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:42.935361   46108 cri.go:89] found id: ""
	I0823 19:06:42.935370   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:42.935423   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:42.939486   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:42.939548   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:42.956010   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:42.956034   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:42.956040   46108 cri.go:89] found id: ""
	I0823 19:06:42.956048   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:42.956106   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:42.960464   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:42.964439   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:42.964493   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:42.982758   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:42.982782   46108 cri.go:89] found id: ""
	I0823 19:06:42.982791   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:42.982875   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:42.986919   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:42.986983   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:43.003491   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:43.003506   46108 cri.go:89] found id: ""
	I0823 19:06:43.003513   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:43.003554   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:43.007437   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:43.007488   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:43.025732   46108 cri.go:89] found id: ""
	I0823 19:06:43.025761   46108 logs.go:284] 0 containers: []
	W0823 19:06:43.025767   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:43.025775   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:43.025836   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:43.043934   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:43.043962   46108 cri.go:89] found id: ""
	I0823 19:06:43.043971   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:43.044028   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:43.048415   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:43.048439   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:43.105880   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:43.105917   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:43.116950   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:43.116979   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:43.138259   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:43.138287   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:43.168099   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:43.168132   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:43.235486   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:43.235522   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:43.258649   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:43.258689   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:43.338039   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:43.338062   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:43.338077   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:43.358272   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:43.358306   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:43.374342   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:43.374371   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:43.413191   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:43.413223   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:43.442937   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:43.442966   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:43.476287   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:43.476319   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:45.994498   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:45.995134   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:45.995194   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:06:45.995255   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:06:46.014234   46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:46.014255   46108 cri.go:89] found id: ""
	I0823 19:06:46.014262   46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
	I0823 19:06:46.014311   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.019587   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:06:46.019650   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:06:46.037930   46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:46.037954   46108 cri.go:89] found id: ""
	I0823 19:06:46.037962   46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
	I0823 19:06:46.038018   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.041902   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:06:46.041977   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:06:46.060288   46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:46.060307   46108 cri.go:89] found id: ""
	I0823 19:06:46.060314   46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
	I0823 19:06:46.060359   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.064538   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:06:46.064606   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:06:46.082325   46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:46.082353   46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:46.082361   46108 cri.go:89] found id: ""
	I0823 19:06:46.082369   46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
	I0823 19:06:46.082431   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.086528   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.090457   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:06:46.090530   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:06:46.109668   46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:46.109696   46108 cri.go:89] found id: ""
	I0823 19:06:46.109705   46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
	I0823 19:06:46.109758   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.115864   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:06:46.115919   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:06:46.132599   46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:46.132623   46108 cri.go:89] found id: ""
	I0823 19:06:46.132633   46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
	I0823 19:06:46.132689   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.137253   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:06:46.137312   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:06:46.157374   46108 cri.go:89] found id: ""
	I0823 19:06:46.157397   46108 logs.go:284] 0 containers: []
	W0823 19:06:46.157406   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:06:46.157412   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:06:46.157465   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:06:46.177625   46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:46.177647   46108 cri.go:89] found id: ""
	I0823 19:06:46.177656   46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
	I0823 19:06:46.177721   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:06:46.182247   46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
	I0823 19:06:46.182277   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
	I0823 19:06:46.206667   46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
	I0823 19:06:46.206705   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
	I0823 19:06:46.225287   46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
	I0823 19:06:46.225318   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
	I0823 19:06:46.263800   46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
	I0823 19:06:46.263831   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
	I0823 19:06:46.290163   46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
	I0823 19:06:46.290206   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
	I0823 19:06:46.327622   46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
	I0823 19:06:46.327658   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
	I0823 19:06:46.363651   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:06:46.363686   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:06:46.439243   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:06:46.439277   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:06:46.539662   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:06:46.539689   46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
	I0823 19:06:46.539705   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
	I0823 19:06:46.562748   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:06:46.562775   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:06:46.639069   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:06:46.639112   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:06:46.665438   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:06:46.665469   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:06:46.677472   46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
	I0823 19:06:46.677503   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
	I0823 19:06:49.233421   46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0823 19:06:49.234058   46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:06:49.234109   46108 kubeadm.go:640] restartCluster took 4m22.17753923s
	W0823 19:06:49.234163   46108 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0823 19:06:49.234189   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0823 19:06:50.979304   46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.745093239s)
	I0823 19:06:50.979376   46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 19:06:50.992214   46108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 19:06:51.000230   46108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 19:06:51.010860   46108 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 19:06:51.010919   46108 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 19:06:51.106704   46108 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
	I0823 19:06:51.106752   46108 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 19:06:51.288628   46108 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 19:06:51.288761   46108 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 19:06:51.288882   46108 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 19:06:51.381515   46108 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 19:06:51.383380   46108 out.go:204]   - Generating certificates and keys ...
	I0823 19:06:51.383503   46108 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 19:06:51.383583   46108 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 19:06:51.383680   46108 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0823 19:06:51.383753   46108 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0823 19:06:51.384021   46108 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0823 19:06:51.384174   46108 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0823 19:06:51.384732   46108 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0823 19:06:51.385124   46108 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0823 19:06:51.385324   46108 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0823 19:06:51.385710   46108 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0823 19:06:51.385869   46108 kubeadm.go:322] [certs] Using the existing "sa" key
	I0823 19:06:51.385941   46108 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 19:06:51.789906   46108 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 19:06:52.240307   46108 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 19:06:52.844096   46108 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 19:06:53.069388   46108 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 19:06:53.085803   46108 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 19:06:53.087761   46108 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 19:06:53.088099   46108 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 19:06:53.265055   46108 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 19:06:53.266894   46108 out.go:204]   - Booting up control plane ...
	I0823 19:06:53.267033   46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 19:06:53.271650   46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 19:06:53.272560   46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 19:06:53.273352   46108 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 19:06:53.275598   46108 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 19:07:33.277021   46108 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0823 19:10:53.280081   46108 kubeadm.go:322] 
	I0823 19:10:53.280158   46108 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0823 19:10:53.280201   46108 kubeadm.go:322] 		timed out waiting for the condition
	I0823 19:10:53.280218   46108 kubeadm.go:322] 
	I0823 19:10:53.280259   46108 kubeadm.go:322] 	This error is likely caused by:
	I0823 19:10:53.280297   46108 kubeadm.go:322] 		- The kubelet is not running
	I0823 19:10:53.280405   46108 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0823 19:10:53.280416   46108 kubeadm.go:322] 
	I0823 19:10:53.280541   46108 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0823 19:10:53.280588   46108 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0823 19:10:53.280646   46108 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0823 19:10:53.280669   46108 kubeadm.go:322] 
	I0823 19:10:53.280819   46108 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0823 19:10:53.280945   46108 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0823 19:10:53.280956   46108 kubeadm.go:322] 
	I0823 19:10:53.281054   46108 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0823 19:10:53.281174   46108 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0823 19:10:53.281283   46108 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0823 19:10:53.281405   46108 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0823 19:10:53.281415   46108 kubeadm.go:322] 
	I0823 19:10:53.282388   46108 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 19:10:53.282504   46108 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0823 19:10:53.282642   46108 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0823 19:10:53.282711   46108 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0823 19:10:53.282768   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0823 19:10:54.260337   46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 19:10:54.271143   46108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 19:10:54.280356   46108 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 19:10:54.280398   46108 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 19:10:54.367150   46108 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
	I0823 19:10:54.367267   46108 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 19:10:54.516397   46108 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 19:10:54.516522   46108 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 19:10:54.516630   46108 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 19:10:54.605518   46108 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 19:10:54.607189   46108 out.go:204]   - Generating certificates and keys ...
	I0823 19:10:54.607326   46108 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 19:10:54.607436   46108 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 19:10:54.609419   46108 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0823 19:10:54.609531   46108 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0823 19:10:54.609663   46108 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0823 19:10:54.609759   46108 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0823 19:10:54.609851   46108 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0823 19:10:54.609940   46108 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0823 19:10:54.610052   46108 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0823 19:10:54.610162   46108 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0823 19:10:54.610207   46108 kubeadm.go:322] [certs] Using the existing "sa" key
	I0823 19:10:54.610294   46108 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 19:10:54.824778   46108 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 19:10:54.960319   46108 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 19:10:55.064971   46108 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 19:10:55.389165   46108 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 19:10:55.407543   46108 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 19:10:55.409088   46108 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 19:10:55.409283   46108 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 19:10:55.561726   46108 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 19:10:55.563621   46108 out.go:204]   - Booting up control plane ...
	I0823 19:10:55.563738   46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 19:10:55.572947   46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 19:10:55.577429   46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 19:10:55.580595   46108 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 19:10:55.585959   46108 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 19:11:35.586874   46108 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0823 19:14:55.590709   46108 kubeadm.go:322] 
	I0823 19:14:55.590790   46108 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0823 19:14:55.590864   46108 kubeadm.go:322] 		timed out waiting for the condition
	I0823 19:14:55.590894   46108 kubeadm.go:322] 
	I0823 19:14:55.590939   46108 kubeadm.go:322] 	This error is likely caused by:
	I0823 19:14:55.590982   46108 kubeadm.go:322] 		- The kubelet is not running
	I0823 19:14:55.591069   46108 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0823 19:14:55.591075   46108 kubeadm.go:322] 
	I0823 19:14:55.591160   46108 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0823 19:14:55.591187   46108 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0823 19:14:55.591213   46108 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0823 19:14:55.591217   46108 kubeadm.go:322] 
	I0823 19:14:55.591325   46108 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0823 19:14:55.591392   46108 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0823 19:14:55.591397   46108 kubeadm.go:322] 
	I0823 19:14:55.591479   46108 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0823 19:14:55.591556   46108 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0823 19:14:55.591619   46108 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0823 19:14:55.591683   46108 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0823 19:14:55.591687   46108 kubeadm.go:322] 
	I0823 19:14:55.593060   46108 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 19:14:55.593189   46108 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0823 19:14:55.593273   46108 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0823 19:14:55.593330   46108 kubeadm.go:406] StartCluster complete in 12m28.59741012s
	I0823 19:14:55.593365   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:14:55.593412   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:14:55.617288   46108 cri.go:89] found id: "10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
	I0823 19:14:55.617321   46108 cri.go:89] found id: ""
	I0823 19:14:55.617329   46108 logs.go:284] 1 containers: [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623]
	I0823 19:14:55.617385   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.621825   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:14:55.621912   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:14:55.641056   46108 cri.go:89] found id: "ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
	I0823 19:14:55.641081   46108 cri.go:89] found id: ""
	I0823 19:14:55.641090   46108 logs.go:284] 1 containers: [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f]
	I0823 19:14:55.641145   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.645786   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:14:55.645856   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:14:55.662999   46108 cri.go:89] found id: ""
	I0823 19:14:55.663026   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.663036   46108 logs.go:286] No container was found matching "coredns"
	I0823 19:14:55.663044   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:14:55.663103   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:14:55.679379   46108 cri.go:89] found id: "806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
	I0823 19:14:55.679404   46108 cri.go:89] found id: ""
	I0823 19:14:55.679413   46108 logs.go:284] 1 containers: [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba]
	I0823 19:14:55.679469   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.683405   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:14:55.683466   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:14:55.701440   46108 cri.go:89] found id: ""
	I0823 19:14:55.701463   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.701472   46108 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:14:55.701480   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:14:55.701555   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:14:55.719282   46108 cri.go:89] found id: "57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
	I0823 19:14:55.719315   46108 cri.go:89] found id: ""
	I0823 19:14:55.719323   46108 logs.go:284] 1 containers: [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98]
	I0823 19:14:55.719380   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.723402   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:14:55.723471   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:14:55.740370   46108 cri.go:89] found id: ""
	I0823 19:14:55.740394   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.740403   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:14:55.740409   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:14:55.740475   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:14:55.756481   46108 cri.go:89] found id: ""
	I0823 19:14:55.756511   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.756520   46108 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:14:55.756538   46108 logs.go:123] Gathering logs for kube-scheduler [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba] ...
	I0823 19:14:55.756552   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
	I0823 19:14:55.829722   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:14:55.829759   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:14:55.892510   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:14:55.892547   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:14:55.918032   46108 logs.go:123] Gathering logs for kube-apiserver [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623] ...
	I0823 19:14:55.918075   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
	I0823 19:14:55.947621   46108 logs.go:123] Gathering logs for etcd [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f] ...
	I0823 19:14:55.947654   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
	I0823 19:14:55.966771   46108 logs.go:123] Gathering logs for kube-controller-manager [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98] ...
	I0823 19:14:55.966813   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
	I0823 19:14:56.012530   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:14:56.012566   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:14:56.077734   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:14:56.077769   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:14:56.090478   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:14:56.090510   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:14:56.204896   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0823 19:14:56.204953   46108 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0823 19:14:56.204988   46108 out.go:239] * 
	W0823 19:14:56.205061   46108 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0823 19:14:56.205089   46108 out.go:239] * 
	W0823 19:14:56.205977   46108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 19:14:56.209130   46108 out.go:177] 
	W0823 19:14:56.210519   46108 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0823 19:14:56.210560   46108 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0823 19:14:56.210585   46108 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0823 19:14:56.212168   46108 out.go:177] 

                                                
                                                
** /stderr **
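Note on the failure above: the run exits with K8S_KUBELET_NOT_RUNNING and the log itself suggests retrying with the kubelet cgroup driver pinned to systemd. As a minimal sketch only (not part of the recorded run), the retry that suggestion points at would look roughly like this, re-using the profile and flags shown in the failing invocation below:

	# hedged sketch: re-run the same profile with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 \
	  --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still fails to come up, inspect it on the guest as the log advises
	# 'systemctl status kubelet' / 'journalctl -xeu kubelet'

See also the related issue referenced in the log: https://github.com/kubernetes/minikube/issues/4172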
version_upgrade_test.go:144: upgrade from v1.22.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-23 19:14:56.729867823 +0000 UTC m=+3739.326779234
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-502460 -n running-upgrade-502460
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-502460 -n running-upgrade-502460: exit status 2 (228.930941ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p running-upgrade-502460 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-573325 sudo                                  | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo                                  | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo cat                              | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo cat                              | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo                                  | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo                                  | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC |                     |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo                                  | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo find                             | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-573325 sudo crio                             | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-573325                                       | bridge-573325                | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	| start   | -p no-preload-301101                                   | no-preload-301101            | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-355473        | old-k8s-version-355473       | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-355473                              | old-k8s-version-355473       | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-301101             | no-preload-301101            | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-301101                                   | no-preload-301101            | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-228249                              | stopped-upgrade-228249       | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:11 UTC |
	| start   | -p                                                     | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:13 UTC |
	|         | default-k8s-diff-port-319240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-355473             | old-k8s-version-355473       | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-355473                              | old-k8s-version-355473       | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-301101                  | no-preload-301101            | jenkins | v1.31.2 | 23 Aug 23 19:12 UTC | 23 Aug 23 19:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-301101                                   | no-preload-301101            | jenkins | v1.31.2 | 23 Aug 23 19:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-319240  | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:13 UTC | 23 Aug 23 19:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:13 UTC | 23 Aug 23 19:14 UTC |
	|         | default-k8s-diff-port-319240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-319240       | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:14 UTC | 23 Aug 23 19:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:14 UTC |                     |
	|         | default-k8s-diff-port-319240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 19:14:52
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 19:14:52.632040   60591 out.go:296] Setting OutFile to fd 1 ...
	I0823 19:14:52.632176   60591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 19:14:52.632183   60591 out.go:309] Setting ErrFile to fd 2...
	I0823 19:14:52.632187   60591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 19:14:52.632367   60591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 19:14:52.632907   60591 out.go:303] Setting JSON to false
	I0823 19:14:52.634001   60591 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7037,"bootTime":1692811056,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 19:14:52.634062   60591 start.go:138] virtualization: kvm guest
	I0823 19:14:52.636307   60591 out.go:177] * [default-k8s-diff-port-319240] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0823 19:14:52.637602   60591 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 19:14:52.638798   60591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 19:14:52.637651   60591 notify.go:220] Checking for updates...
	I0823 19:14:52.641021   60591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 19:14:52.642396   60591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 19:14:52.643647   60591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0823 19:14:52.644931   60591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 19:14:52.646528   60591 config.go:182] Loaded profile config "default-k8s-diff-port-319240": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0823 19:14:52.646970   60591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 19:14:52.647016   60591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 19:14:52.662151   60591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0823 19:14:52.662569   60591 main.go:141] libmachine: () Calling .GetVersion
	I0823 19:14:52.663120   60591 main.go:141] libmachine: Using API Version  1
	I0823 19:14:52.663147   60591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 19:14:52.663556   60591 main.go:141] libmachine: () Calling .GetMachineName
	I0823 19:14:52.663754   60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
	I0823 19:14:52.663995   60591 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 19:14:52.664284   60591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 19:14:52.664312   60591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 19:14:52.678128   60591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0823 19:14:52.678538   60591 main.go:141] libmachine: () Calling .GetVersion
	I0823 19:14:52.678985   60591 main.go:141] libmachine: Using API Version  1
	I0823 19:14:52.679007   60591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 19:14:52.679373   60591 main.go:141] libmachine: () Calling .GetMachineName
	I0823 19:14:52.679565   60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
	I0823 19:14:52.714567   60591 out.go:177] * Using the kvm2 driver based on existing profile
	I0823 19:14:52.715876   60591 start.go:298] selected driver: kvm2
	I0823 19:14:52.715886   60591 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-319240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-319240 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.123 Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 19:14:52.715977   60591 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 19:14:52.716590   60591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 19:14:52.716678   60591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17086-11104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0823 19:14:52.733023   60591 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0823 19:14:52.733418   60591 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 19:14:52.733453   60591 cni.go:84] Creating CNI manager for ""
	I0823 19:14:52.733461   60591 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0823 19:14:52.733474   60591 start_flags.go:319] config:
	{Name:default-k8s-diff-port-319240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-319240 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.123 Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 19:14:52.733650   60591 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 19:14:52.736139   60591 out.go:177] * Starting control plane node default-k8s-diff-port-319240 in cluster default-k8s-diff-port-319240
	I0823 19:14:52.737169   60591 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0823 19:14:52.737196   60591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0823 19:14:52.737204   60591 cache.go:57] Caching tarball of preloaded images
	I0823 19:14:52.737251   60591 preload.go:174] Found /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0823 19:14:52.737262   60591 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on containerd
	I0823 19:14:52.737368   60591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/default-k8s-diff-port-319240/config.json ...
	I0823 19:14:52.737564   60591 start.go:365] acquiring machines lock for default-k8s-diff-port-319240: {Name:mk1833667e1e194459e10edb6eaddedbcc5a0864 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 19:14:52.737606   60591 start.go:369] acquired machines lock for "default-k8s-diff-port-319240" in 22.707µs
	I0823 19:14:52.737621   60591 start.go:96] Skipping create...Using existing machine configuration
	I0823 19:14:52.737629   60591 fix.go:54] fixHost starting: 
	I0823 19:14:52.737879   60591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 19:14:52.737902   60591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 19:14:52.752555   60591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0823 19:14:52.752952   60591 main.go:141] libmachine: () Calling .GetVersion
	I0823 19:14:52.753431   60591 main.go:141] libmachine: Using API Version  1
	I0823 19:14:52.753451   60591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 19:14:52.753775   60591 main.go:141] libmachine: () Calling .GetMachineName
	I0823 19:14:52.753961   60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
	I0823 19:14:52.754122   60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .GetState
	I0823 19:14:52.755783   60591 fix.go:102] recreateIfNeeded on default-k8s-diff-port-319240: state=Stopped err=<nil>
	I0823 19:14:52.755808   60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
	W0823 19:14:52.755953   60591 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 19:14:52.757648   60591 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-319240" ...
	I0823 19:14:55.590709   46108 kubeadm.go:322] 
	I0823 19:14:55.590790   46108 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0823 19:14:55.590864   46108 kubeadm.go:322] 		timed out waiting for the condition
	I0823 19:14:55.590894   46108 kubeadm.go:322] 
	I0823 19:14:55.590939   46108 kubeadm.go:322] 	This error is likely caused by:
	I0823 19:14:55.590982   46108 kubeadm.go:322] 		- The kubelet is not running
	I0823 19:14:55.591069   46108 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0823 19:14:55.591075   46108 kubeadm.go:322] 
	I0823 19:14:55.591160   46108 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0823 19:14:55.591187   46108 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0823 19:14:55.591213   46108 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0823 19:14:55.591217   46108 kubeadm.go:322] 
	I0823 19:14:55.591325   46108 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0823 19:14:55.591392   46108 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0823 19:14:55.591397   46108 kubeadm.go:322] 
	I0823 19:14:55.591479   46108 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0823 19:14:55.591556   46108 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0823 19:14:55.591619   46108 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0823 19:14:55.591683   46108 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0823 19:14:55.591687   46108 kubeadm.go:322] 
	I0823 19:14:55.593060   46108 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 19:14:55.593189   46108 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0823 19:14:55.593273   46108 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0823 19:14:55.593330   46108 kubeadm.go:406] StartCluster complete in 12m28.59741012s
	I0823 19:14:55.593365   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:14:55.593412   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:14:55.617288   46108 cri.go:89] found id: "10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
	I0823 19:14:55.617321   46108 cri.go:89] found id: ""
	I0823 19:14:55.617329   46108 logs.go:284] 1 containers: [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623]
	I0823 19:14:55.617385   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.621825   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:14:55.621912   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:14:55.641056   46108 cri.go:89] found id: "ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
	I0823 19:14:55.641081   46108 cri.go:89] found id: ""
	I0823 19:14:55.641090   46108 logs.go:284] 1 containers: [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f]
	I0823 19:14:55.641145   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.645786   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:14:55.645856   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:14:55.662999   46108 cri.go:89] found id: ""
	I0823 19:14:55.663026   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.663036   46108 logs.go:286] No container was found matching "coredns"
	I0823 19:14:55.663044   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:14:55.663103   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:14:55.679379   46108 cri.go:89] found id: "806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
	I0823 19:14:55.679404   46108 cri.go:89] found id: ""
	I0823 19:14:55.679413   46108 logs.go:284] 1 containers: [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba]
	I0823 19:14:55.679469   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.683405   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:14:55.683466   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:14:55.701440   46108 cri.go:89] found id: ""
	I0823 19:14:55.701463   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.701472   46108 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:14:55.701480   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:14:55.701555   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:14:55.719282   46108 cri.go:89] found id: "57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
	I0823 19:14:55.719315   46108 cri.go:89] found id: ""
	I0823 19:14:55.719323   46108 logs.go:284] 1 containers: [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98]
	I0823 19:14:55.719380   46108 ssh_runner.go:195] Run: which crictl
	I0823 19:14:55.723402   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:14:55.723471   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:14:55.740370   46108 cri.go:89] found id: ""
	I0823 19:14:55.740394   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.740403   46108 logs.go:286] No container was found matching "kindnet"
	I0823 19:14:55.740409   46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:14:55.740475   46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:14:55.756481   46108 cri.go:89] found id: ""
	I0823 19:14:55.756511   46108 logs.go:284] 0 containers: []
	W0823 19:14:55.756520   46108 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:14:55.756538   46108 logs.go:123] Gathering logs for kube-scheduler [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba] ...
	I0823 19:14:55.756552   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
	I0823 19:14:55.829722   46108 logs.go:123] Gathering logs for containerd ...
	I0823 19:14:55.829759   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:14:55.892510   46108 logs.go:123] Gathering logs for container status ...
	I0823 19:14:55.892547   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:14:55.918032   46108 logs.go:123] Gathering logs for kube-apiserver [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623] ...
	I0823 19:14:55.918075   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
	I0823 19:14:55.947621   46108 logs.go:123] Gathering logs for etcd [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f] ...
	I0823 19:14:55.947654   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
	I0823 19:14:55.966771   46108 logs.go:123] Gathering logs for kube-controller-manager [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98] ...
	I0823 19:14:55.966813   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
	I0823 19:14:56.012530   46108 logs.go:123] Gathering logs for kubelet ...
	I0823 19:14:56.012566   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:14:56.077734   46108 logs.go:123] Gathering logs for dmesg ...
	I0823 19:14:56.077769   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:14:56.090478   46108 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:14:56.090510   46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:14:56.204896   46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0823 19:14:56.204953   46108 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0823 19:14:56.204988   46108 out.go:239] * 
	W0823 19:14:56.205061   46108 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0823 19:14:56.205089   46108 out.go:239] * 
	W0823 19:14:56.205977   46108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 19:14:56.209130   46108 out.go:177] 
	W0823 19:14:56.210519   46108 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0823 19:14:56.210560   46108 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0823 19:14:56.210585   46108 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0823 19:14:56.212168   46108 out.go:177] 
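	
	The start attempt above exits with K8S_KUBELET_NOT_RUNNING and points at 'journalctl -xeu kubelet' and the kubelet cgroup driver. A minimal sketch of those suggested checks, run from a shell inside the upgraded VM (the profile name is taken from this run; the commands mirror the advice printed above and are illustrative, not part of the captured log):
	
	  # open a shell in the failing VM (hypothetical follow-up, not executed in this run)
	  minikube ssh -p running-upgrade-502460
	
	  # inside the VM: check whether the kubelet is running and why it keeps stopping
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	
	  # list the control-plane containers known to containerd, as kubeadm suggests
	  sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause
	
	If the kubelet logs point at a cgroup-driver mismatch, the retry suggested above would look roughly like this (other flags unchanged from the failing invocation):
	
	  minikube start -p running-upgrade-502460 --extra-config=kubelet.cgroup-driver=systemd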
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	57269c19a146e       ae24db9aa2cc0       49 seconds ago      Exited              kube-controller-manager   4                   d43e217fe53b1
	ed5e28de76164       0369cf4303ffd       57 seconds ago      Exited              etcd                      5                   928ccbda06851
	10baea8ae55ec       106ff58d43082       57 seconds ago      Exited              kube-apiserver            4                   abe62d19cd085
	806087a328e88       f917b8c8f55b7       3 minutes ago       Running             kube-scheduler            0                   a721825474a44
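	
	The table above shows etcd, kube-apiserver and kube-controller-manager all in the Exited state after several restart attempts, with only kube-scheduler still running. A short sketch of how those per-container logs can be pulled by hand (standard crictl usage; the full container ID below is the etcd ID reported in this log):
	
	  # list all containers with their state and restart attempts
	  sudo crictl ps -a
	
	  # fetch the logs of the exited etcd container from the table above
	  sudo crictl logs ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f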
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2023-08-23 19:00:44 UTC, end at Wed 2023-08-23 19:14:57 UTC. --
	Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.908786689Z" level=error msg="Failed to pipe stderr of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\"" error="reading from a closed fifo"
	Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.909029808Z" level=info msg="Finish piping stderr of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\""
	Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.909358158Z" level=error msg="Failed to pipe stdout of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\"" error="reading from a closed fifo"
	Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.909476009Z" level=info msg="Finish piping stdout of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\""
	Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.913322355Z" level=error msg="StartContainer for \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\" failed" error="failed to create containerd task: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: \"etcd\": executable file not found in $PATH: unknown"
	Aug 23 19:14:00 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:00.197236999Z" level=info msg="RemoveContainer for \"900a6eb03e89e19798431089a59a61c435c2969be6edb17671baf9201161108e\""
	Aug 23 19:14:00 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:00.204080067Z" level=info msg="RemoveContainer for \"900a6eb03e89e19798431089a59a61c435c2969be6edb17671baf9201161108e\" returns successfully"
	Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.676362551Z" level=info msg="CreateContainer within sandbox \"d43e217fe53b131454e7218bf8fa52be082b22c9ce7fad672a442a0ab705c1c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
	Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.713382594Z" level=info msg="CreateContainer within sandbox \"d43e217fe53b131454e7218bf8fa52be082b22c9ce7fad672a442a0ab705c1c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
	Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.714246107Z" level=info msg="StartContainer for \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
	Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.861362831Z" level=info msg="StartContainer for \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\" returns successfully"
	Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.333963832Z" level=info msg="Finish piping stderr of container \"10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623\""
	Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.334085852Z" level=info msg="Finish piping stdout of container \"10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623\""
	Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.336875833Z" level=info msg="TaskExit event &TaskExit{ContainerID:10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623,ID:10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623,Pid:13271,ExitStatus:1,ExitedAt:2023-08-23 19:14:20.336451071 +0000 UTC,XXX_unrecognized:[],}"
	Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.383863500Z" level=info msg="shim disconnected" id=10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623
	Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.384105693Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed"
	Aug 23 19:14:21 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:21.255581558Z" level=info msg="RemoveContainer for \"28ac3f3d7fcf94b7074a060c90afbf8b20f9c6c023f0cee413a83ebf592f0ca6\""
	Aug 23 19:14:21 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:21.261020242Z" level=info msg="RemoveContainer for \"28ac3f3d7fcf94b7074a060c90afbf8b20f9c6c023f0cee413a83ebf592f0ca6\" returns successfully"
	Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.567624247Z" level=info msg="Finish piping stdout of container \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
	Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.567878776Z" level=info msg="Finish piping stderr of container \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
	Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.570055153Z" level=info msg="TaskExit event &TaskExit{ContainerID:57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98,ID:57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98,Pid:13322,ExitStatus:255,ExitedAt:2023-08-23 19:14:28.569462851 +0000 UTC,XXX_unrecognized:[],}"
	Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.614336221Z" level=info msg="shim disconnected" id=57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98
	Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.614437301Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed"
	Aug 23 19:14:29 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:29.277098508Z" level=info msg="RemoveContainer for \"3c522063da00c32ff3d2e9d4c2597ed629acbce70133e030376a02d1a2374961\""
	Aug 23 19:14:29 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:29.282839472Z" level=info msg="RemoveContainer for \"3c522063da00c32ff3d2e9d4c2597ed629acbce70133e030376a02d1a2374961\" returns successfully"
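	
	The containerd log above shows the etcd container repeatedly failing to start with 'exec: "etcd": executable file not found in $PATH', which is consistent with the empty etcd log section further below. A hedged sketch of how the container's image and spec could be double-checked (standard crictl subcommands, not part of the captured run):
	
	  # list the images containerd has available on the node
	  sudo crictl images
	
	  # dump the image, command and mounts of the failing etcd container
	  sudo crictl inspect ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f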
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.028930] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.802046] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +1.084040] vboxguest: loading out-of-tree module taints kernel.
	[  +0.004644] vboxguest: PCI device not found, probably running on physical hardware.
	[  +2.090082] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[Aug23 19:01] systemd-fstab-generator[2102]: Ignoring "noauto" for root device
	[  +0.131450] systemd-fstab-generator[2115]: Ignoring "noauto" for root device
	[  +0.188266] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	[ +34.364376] systemd-fstab-generator[2638]: Ignoring "noauto" for root device
	[ +16.815080] systemd-fstab-generator[3054]: Ignoring "noauto" for root device
	[Aug23 19:02] kauditd_printk_skb: 38 callbacks suppressed
	[  +3.686858] systemd-fstab-generator[3631]: Ignoring "noauto" for root device
	[  +0.255652] systemd-fstab-generator[3654]: Ignoring "noauto" for root device
	[  +0.179236] systemd-fstab-generator[3677]: Ignoring "noauto" for root device
	[  +0.369608] systemd-fstab-generator[3739]: Ignoring "noauto" for root device
	[  +3.934059] kauditd_printk_skb: 71 callbacks suppressed
	[  +4.689110] systemd-fstab-generator[4112]: Ignoring "noauto" for root device
	[  +3.849395] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.243365] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.359217] systemd-fstab-generator[5312]: Ignoring "noauto" for root device
	[ +11.066871] NFSD: Unable to end grace period: -110
	[Aug23 19:06] kauditd_printk_skb: 5 callbacks suppressed
	[  +3.086392] systemd-fstab-generator[11151]: Ignoring "noauto" for root device
	[Aug23 19:10] systemd-fstab-generator[12390]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f] <==
	* 
	* 
	* ==> kernel <==
	*  19:14:57 up 14 min,  0 users,  load average: 0.13, 0.24, 0.22
	Linux running-upgrade-502460 4.19.182 #1 SMP Fri Jul 2 00:45:17 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623] <==
	* Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	I0823 19:13:59.963658       1 server.go:629] external host was not specified, using 192.168.61.47
	I0823 19:13:59.964587       1 server.go:181] Version: v1.21.2
	I0823 19:14:00.319353       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0823 19:14:00.320778       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0823 19:14:00.320881       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0823 19:14:00.322760       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0823 19:14:00.322966       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0823 19:14:00.326841       1 client.go:360] parsed scheme: "endpoint"
	I0823 19:14:00.327270       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	W0823 19:14:00.327989       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0823 19:14:01.319552       1 client.go:360] parsed scheme: "endpoint"
	I0823 19:14:01.319599       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	W0823 19:14:01.319908       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:01.329024       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:02.320757       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:03.061592       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:03.808670       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:06.021433       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:06.652921       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:10.772963       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:11.440753       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:17.323782       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0823 19:14:18.623507       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	Error: context deadline exceeded
	
	* 
	* ==> kube-controller-manager [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:151 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).processNextWorkItem(0xc0008e6c80, 0x203000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:263 +0x66
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).runWorker(...)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:258
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00067f710)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00067f710, 0x500bf00, 0xc000e43230, 0x4b25f01, 0xc0001000c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00067f710, 0x3b9aca00, 0x0, 0x1, 0xc0001000c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00067f710, 0x3b9aca00, 0xc0001000c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d1
	
	goroutine 146 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00067f720, 0x500bf00, 0xc000e43200, 0x4b25f01, 0xc0001000c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00067f720, 0xdf8475800, 0x0, 0x1, 0xc0001000c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00067f720, 0xdf8475800, 0xc0001000c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	
	* 
	* ==> kube-scheduler [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba] <==
	* E0823 19:13:47.830467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:13:52.375525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:13:55.121793       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.47:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:13:57.943329       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	I0823 19:14:10.383895       1 trace.go:205] Trace[890790537]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (23-Aug-2023 19:14:00.381) (total time: 10002ms):
	Trace[890790537]: [10.002060061s] [10.002060061s] END
	E0823 19:14:10.383985       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.47:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0823 19:14:17.221520       1 trace.go:205] Trace[5207339]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (23-Aug-2023 19:14:07.220) (total time: 10001ms):
	Trace[5207339]: [10.00138842s] [10.00138842s] END
	E0823 19:14:17.221609       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0823 19:14:17.441603       1 trace.go:205] Trace[1349132384]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (23-Aug-2023 19:14:07.440) (total time: 10000ms):
	Trace[1349132384]: [10.00087358s] [10.00087358s] END
	E0823 19:14:17.441668       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.47:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0823 19:14:21.336063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.47:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:21.336430       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.47:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:21.336564       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:21.337297       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.47:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:22.437512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:24.641954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:31.784488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:37.195691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.47:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:40.376831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:42.808330       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:49.257685       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.47:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	E0823 19:14:56.008003       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2023-08-23 19:00:44 UTC, end at Wed 2023-08-23 19:14:57 UTC. --
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.323262   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: I0823 19:14:55.352025   12398 kubelet_node_status.go:71] "Attempting to register node" node="running-upgrade-502460"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.352714   12398 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.47:8443: connect: connection refused" node="running-upgrade-502460"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.424103   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.524768   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.626037   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.726225   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.826812   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.927025   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.028036   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.128434   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.229326   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.329947   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.430884   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.531477   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.632533   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.733613   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.834531   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.936413   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.036967   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.137231   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.238384   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.338562   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.439225   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.539510   12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0823 19:14:57.451370   60720 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-502460 -n running-upgrade-502460
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-502460 -n running-upgrade-502460: exit status 2 (227.908785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-502460" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-502460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-502460
E0823 19:14:58.449706   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-502460: (1.44303674s)
--- FAIL: TestRunningBinaryUpgrade (909.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (1019.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.1940340879.exe start -p stopped-upgrade-228249 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0823 18:54:28.918201   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.1940340879.exe start -p stopped-upgrade-228249 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (3m3.093431925s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.1940340879.exe -p stopped-upgrade-228249 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.1940340879.exe -p stopped-upgrade-228249 stop: (2.103559335s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-228249 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-228249 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109 (13m53.984227521s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-228249] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-228249 in cluster stopped-upgrade-228249
	* Downloading Kubernetes v1.21.2 preload ...
	* Restarting existing kvm2 VM for "stopped-upgrade-228249" ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 18:57:30.540220   42158 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:57:30.540354   42158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:57:30.540366   42158 out.go:309] Setting ErrFile to fd 2...
	I0823 18:57:30.540372   42158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:57:30.540574   42158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 18:57:30.541121   42158 out.go:303] Setting JSON to false
	I0823 18:57:30.542086   42158 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5995,"bootTime":1692811056,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 18:57:30.542145   42158 start.go:138] virtualization: kvm guest
	I0823 18:57:30.544342   42158 out.go:177] * [stopped-upgrade-228249] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0823 18:57:30.545754   42158 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 18:57:30.546963   42158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 18:57:30.545793   42158 notify.go:220] Checking for updates...
	I0823 18:57:30.549529   42158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 18:57:30.551048   42158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 18:57:30.552377   42158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0823 18:57:30.553870   42158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 18:57:30.555974   42158 config.go:182] Loaded profile config "stopped-upgrade-228249": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0823 18:57:30.556510   42158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:57:30.556569   42158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:57:30.571095   42158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
	I0823 18:57:30.571587   42158 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:57:30.572237   42158 main.go:141] libmachine: Using API Version  1
	I0823 18:57:30.572264   42158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:57:30.572671   42158 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:57:30.572873   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:57:30.574691   42158 out.go:177] * Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
	I0823 18:57:30.576171   42158 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 18:57:30.576491   42158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:57:30.576533   42158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:57:30.591909   42158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42475
	I0823 18:57:30.592379   42158 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:57:30.592866   42158 main.go:141] libmachine: Using API Version  1
	I0823 18:57:30.592890   42158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:57:30.593169   42158 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:57:30.593352   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:57:30.631330   42158 out.go:177] * Using the kvm2 driver based on existing profile
	I0823 18:57:30.632536   42158 start.go:298] selected driver: kvm2
	I0823 18:57:30.632550   42158 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-228249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-228249 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:57:30.632647   42158 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 18:57:30.633386   42158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 18:57:30.633476   42158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17086-11104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0823 18:57:30.651193   42158 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0823 18:57:30.651557   42158 cni.go:84] Creating CNI manager for ""
	I0823 18:57:30.651573   42158 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0823 18:57:30.651590   42158 start_flags.go:319] config:
	{Name:stopped-upgrade-228249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-228249 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:57:30.651767   42158 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 18:57:30.653493   42158 out.go:177] * Starting control plane node stopped-upgrade-228249 in cluster stopped-upgrade-228249
	I0823 18:57:30.654607   42158 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0823 18:57:31.258492   42158 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4
	I0823 18:57:31.258525   42158 cache.go:57] Caching tarball of preloaded images
	I0823 18:57:31.258656   42158 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0823 18:57:31.260753   42158 out.go:177] * Downloading Kubernetes v1.21.2 preload ...
	I0823 18:57:31.262072   42158 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 ...
	I0823 18:57:31.433747   42158 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:1962145810117db2773062d16463a139 -> /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4
	I0823 18:57:53.280641   42158 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 ...
	I0823 18:57:53.280734   42158 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 ...
	I0823 18:57:54.196170   42158 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
	I0823 18:57:54.196295   42158 profile.go:148] Saving config to /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/config.json ...
	I0823 18:57:54.199107   42158 start.go:365] acquiring machines lock for stopped-upgrade-228249: {Name:mk1833667e1e194459e10edb6eaddedbcc5a0864 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 18:58:34.374559   42158 start.go:369] acquired machines lock for "stopped-upgrade-228249" in 40.175392811s
	I0823 18:58:34.374630   42158 start.go:96] Skipping create...Using existing machine configuration
	I0823 18:58:34.374646   42158 fix.go:54] fixHost starting: 
	I0823 18:58:34.375105   42158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:58:34.375158   42158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:58:34.394959   42158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
	I0823 18:58:34.395368   42158 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:58:34.395944   42158 main.go:141] libmachine: Using API Version  1
	I0823 18:58:34.395965   42158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:58:34.396308   42158 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:58:34.396522   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:58:34.396691   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetState
	I0823 18:58:34.398301   42158 fix.go:102] recreateIfNeeded on stopped-upgrade-228249: state=Stopped err=<nil>
	I0823 18:58:34.398336   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	W0823 18:58:34.398531   42158 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 18:58:34.400475   42158 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-228249" ...
	I0823 18:58:34.401639   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .Start
	I0823 18:58:34.401812   42158 main.go:141] libmachine: (stopped-upgrade-228249) Ensuring networks are active...
	I0823 18:58:34.402631   42158 main.go:141] libmachine: (stopped-upgrade-228249) Ensuring network default is active
	I0823 18:58:34.402976   42158 main.go:141] libmachine: (stopped-upgrade-228249) Ensuring network mk-stopped-upgrade-228249 is active
	I0823 18:58:34.403441   42158 main.go:141] libmachine: (stopped-upgrade-228249) Getting domain xml...
	I0823 18:58:34.404315   42158 main.go:141] libmachine: (stopped-upgrade-228249) Creating domain...
	I0823 18:58:35.746954   42158 main.go:141] libmachine: (stopped-upgrade-228249) Waiting to get IP...
	I0823 18:58:35.747916   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:35.748309   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:35.748397   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:35.748302   43646 retry.go:31] will retry after 263.054695ms: waiting for machine to come up
	I0823 18:58:36.012951   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:36.013394   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:36.013456   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:36.013391   43646 retry.go:31] will retry after 358.611513ms: waiting for machine to come up
	I0823 18:58:36.374113   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:36.374590   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:36.374614   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:36.374555   43646 retry.go:31] will retry after 373.650067ms: waiting for machine to come up
	I0823 18:58:36.750002   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:36.750526   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:36.750553   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:36.750472   43646 retry.go:31] will retry after 498.92962ms: waiting for machine to come up
	I0823 18:58:37.251376   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:37.251875   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:37.251903   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:37.251829   43646 retry.go:31] will retry after 631.26831ms: waiting for machine to come up
	I0823 18:58:37.884360   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:37.884870   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:37.884902   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:37.884800   43646 retry.go:31] will retry after 869.419326ms: waiting for machine to come up
	I0823 18:58:38.755973   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:38.756597   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:38.756627   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:38.756544   43646 retry.go:31] will retry after 1.143781878s: waiting for machine to come up
	I0823 18:58:39.902230   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:39.902657   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:39.902685   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:39.902630   43646 retry.go:31] will retry after 1.390551219s: waiting for machine to come up
	I0823 18:58:41.294925   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:41.295387   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:41.295421   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:41.295314   43646 retry.go:31] will retry after 1.409277108s: waiting for machine to come up
	I0823 18:58:42.706803   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:42.707375   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:42.707406   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:42.707307   43646 retry.go:31] will retry after 1.827278306s: waiting for machine to come up
	I0823 18:58:44.536663   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:44.537183   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:44.537213   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:44.537122   43646 retry.go:31] will retry after 2.374560259s: waiting for machine to come up
	I0823 18:58:46.913038   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:46.913569   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:46.913599   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:46.913514   43646 retry.go:31] will retry after 2.833856564s: waiting for machine to come up
	I0823 18:58:49.748570   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:49.749022   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | unable to find current IP address of domain stopped-upgrade-228249 in network mk-stopped-upgrade-228249
	I0823 18:58:49.749044   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | I0823 18:58:49.748965   43646 retry.go:31] will retry after 4.408755022s: waiting for machine to come up
	I0823 18:58:54.159680   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.160174   42158 main.go:141] libmachine: (stopped-upgrade-228249) Found IP for machine: 192.168.72.172
	I0823 18:58:54.160193   42158 main.go:141] libmachine: (stopped-upgrade-228249) Reserving static IP address...
	I0823 18:58:54.160205   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has current primary IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.160627   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "stopped-upgrade-228249", mac: "52:54:00:8f:a8:75", ip: "192.168.72.172"} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.160663   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | skip adding static IP to network mk-stopped-upgrade-228249 - found existing host DHCP lease matching {name: "stopped-upgrade-228249", mac: "52:54:00:8f:a8:75", ip: "192.168.72.172"}
	I0823 18:58:54.160682   42158 main.go:141] libmachine: (stopped-upgrade-228249) Reserved static IP address: 192.168.72.172
	I0823 18:58:54.160701   42158 main.go:141] libmachine: (stopped-upgrade-228249) Waiting for SSH to be available...
	I0823 18:58:54.160721   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | Getting to WaitForSSH function...
	I0823 18:58:54.163096   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.163445   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.163483   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.163628   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | Using SSH client type: external
	I0823 18:58:54.163654   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | Using SSH private key: /home/jenkins/minikube-integration/17086-11104/.minikube/machines/stopped-upgrade-228249/id_rsa (-rw-------)
	I0823 18:58:54.163684   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17086-11104/.minikube/machines/stopped-upgrade-228249/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0823 18:58:54.163713   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | About to run SSH command:
	I0823 18:58:54.163730   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | exit 0
	I0823 18:58:54.297525   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | SSH cmd err, output: <nil>: 
	I0823 18:58:54.297937   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetConfigRaw
	I0823 18:58:54.298533   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetIP
	I0823 18:58:54.301488   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.301876   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.301942   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.302127   42158 profile.go:148] Saving config to /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/config.json ...
	I0823 18:58:54.302375   42158 machine.go:88] provisioning docker machine ...
	I0823 18:58:54.302398   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:58:54.302613   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetMachineName
	I0823 18:58:54.302768   42158 buildroot.go:166] provisioning hostname "stopped-upgrade-228249"
	I0823 18:58:54.302788   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetMachineName
	I0823 18:58:54.302936   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:54.305594   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.305937   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.305966   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.306138   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHPort
	I0823 18:58:54.306326   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.306501   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.306663   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHUsername
	I0823 18:58:54.306824   42158 main.go:141] libmachine: Using SSH client type: native
	I0823 18:58:54.307251   42158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0823 18:58:54.307270   42158 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-228249 && echo "stopped-upgrade-228249" | sudo tee /etc/hostname
	I0823 18:58:54.432717   42158 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-228249
	
	I0823 18:58:54.432746   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:54.435648   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.436079   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.436108   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.436268   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHPort
	I0823 18:58:54.436484   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.436694   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.436847   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHUsername
	I0823 18:58:54.437025   42158 main.go:141] libmachine: Using SSH client type: native
	I0823 18:58:54.437406   42158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0823 18:58:54.437425   42158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-228249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-228249/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-228249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 18:58:54.563665   42158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 18:58:54.563694   42158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17086-11104/.minikube CaCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17086-11104/.minikube}
	I0823 18:58:54.563725   42158 buildroot.go:174] setting up certificates
	I0823 18:58:54.563737   42158 provision.go:83] configureAuth start
	I0823 18:58:54.563751   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetMachineName
	I0823 18:58:54.564010   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetIP
	I0823 18:58:54.567036   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.567357   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.567391   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.567509   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:54.569731   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.570075   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.570107   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.570175   42158 provision.go:138] copyHostCerts
	I0823 18:58:54.570226   42158 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem, removing ...
	I0823 18:58:54.570242   42158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem
	I0823 18:58:54.570300   42158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem (1078 bytes)
	I0823 18:58:54.570387   42158 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem, removing ...
	I0823 18:58:54.570394   42158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem
	I0823 18:58:54.570412   42158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem (1123 bytes)
	I0823 18:58:54.570465   42158 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem, removing ...
	I0823 18:58:54.570472   42158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem
	I0823 18:58:54.570488   42158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem (1675 bytes)
	I0823 18:58:54.570539   42158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-228249 san=[192.168.72.172 192.168.72.172 localhost 127.0.0.1 minikube stopped-upgrade-228249]
	I0823 18:58:54.705858   42158 provision.go:172] copyRemoteCerts
	I0823 18:58:54.705941   42158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 18:58:54.705974   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:54.708851   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.709163   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.709190   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.709397   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHPort
	I0823 18:58:54.709621   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.709786   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHUsername
	I0823 18:58:54.709920   42158 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/stopped-upgrade-228249/id_rsa Username:docker}
	I0823 18:58:54.796523   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 18:58:54.810913   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0823 18:58:54.824764   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0823 18:58:54.839053   42158 provision.go:86] duration metric: configureAuth took 275.301827ms
	I0823 18:58:54.839079   42158 buildroot.go:189] setting minikube options for container-runtime
	I0823 18:58:54.839272   42158 config.go:182] Loaded profile config "stopped-upgrade-228249": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0823 18:58:54.839301   42158 machine.go:91] provisioned docker machine in 536.910892ms
	I0823 18:58:54.839313   42158 start.go:300] post-start starting for "stopped-upgrade-228249" (driver="kvm2")
	I0823 18:58:54.839327   42158 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 18:58:54.839358   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:58:54.839665   42158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 18:58:54.839692   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:54.842516   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.842862   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.842892   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.843006   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHPort
	I0823 18:58:54.843192   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.843358   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHUsername
	I0823 18:58:54.843549   42158 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/stopped-upgrade-228249/id_rsa Username:docker}
	I0823 18:58:54.932858   42158 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 18:58:54.936794   42158 info.go:137] Remote host: Buildroot 2020.02.12
	I0823 18:58:54.936818   42158 filesync.go:126] Scanning /home/jenkins/minikube-integration/17086-11104/.minikube/addons for local assets ...
	I0823 18:58:54.936893   42158 filesync.go:126] Scanning /home/jenkins/minikube-integration/17086-11104/.minikube/files for local assets ...
	I0823 18:58:54.936996   42158 filesync.go:149] local asset: /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0823 18:58:54.937187   42158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0823 18:58:54.943542   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0823 18:58:54.957810   42158 start.go:303] post-start completed in 118.482456ms
	I0823 18:58:54.957831   42158 fix.go:56] fixHost completed within 20.58319099s
	I0823 18:58:54.957856   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:54.960616   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.960922   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:54.960961   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:54.961062   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHPort
	I0823 18:58:54.961267   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.961459   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:54.961632   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHUsername
	I0823 18:58:54.961831   42158 main.go:141] libmachine: Using SSH client type: native
	I0823 18:58:54.962207   42158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0823 18:58:54.962219   42158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0823 18:58:55.082529   42158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692817135.018907693
	
	I0823 18:58:55.082561   42158 fix.go:206] guest clock: 1692817135.018907693
	I0823 18:58:55.082572   42158 fix.go:219] Guest: 2023-08-23 18:58:55.018907693 +0000 UTC Remote: 2023-08-23 18:58:54.957835778 +0000 UTC m=+84.453230391 (delta=61.071915ms)
	I0823 18:58:55.082598   42158 fix.go:190] guest clock delta is within tolerance: 61.071915ms
	I0823 18:58:55.082606   42158 start.go:83] releasing machines lock for "stopped-upgrade-228249", held for 20.70800775s
	I0823 18:58:55.082636   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:58:55.082928   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetIP
	I0823 18:58:55.086159   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:55.086655   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:55.086687   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:55.086892   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:58:55.087564   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:58:55.087767   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .DriverName
	I0823 18:58:55.087880   42158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 18:58:55.087940   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:55.088004   42158 ssh_runner.go:195] Run: cat /version.json
	I0823 18:58:55.088029   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHHostname
	I0823 18:58:55.090804   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:55.091086   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:55.091249   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:55.091291   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:55.091430   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHPort
	I0823 18:58:55.091547   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:55.091580   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:55.091609   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:55.091731   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHPort
	I0823 18:58:55.091803   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHUsername
	I0823 18:58:55.091981   42158 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/stopped-upgrade-228249/id_rsa Username:docker}
	I0823 18:58:55.091997   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHKeyPath
	I0823 18:58:55.092139   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetSSHUsername
	I0823 18:58:55.092260   42158 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/stopped-upgrade-228249/id_rsa Username:docker}
	W0823 18:58:55.200776   42158 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0823 18:58:55.200866   42158 ssh_runner.go:195] Run: systemctl --version
	I0823 18:58:55.206263   42158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 18:58:55.211709   42158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 18:58:55.211783   42158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 18:58:55.223562   42158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 18:58:55.223579   42158 start.go:466] detecting cgroup driver to use...
	I0823 18:58:55.223637   42158 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 18:58:55.246176   42158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 18:58:55.257635   42158 docker.go:196] disabling cri-docker service (if available) ...
	I0823 18:58:55.257697   42158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0823 18:58:55.267788   42158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0823 18:58:55.277269   42158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0823 18:58:55.287863   42158 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0823 18:58:55.287940   42158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0823 18:58:55.441084   42158 docker.go:212] disabling docker service ...
	I0823 18:58:55.441157   42158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0823 18:58:55.453088   42158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0823 18:58:55.464597   42158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0823 18:58:55.608102   42158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0823 18:58:55.781913   42158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0823 18:58:55.795863   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 18:58:55.812377   42158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0823 18:58:55.821114   42158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 18:58:55.829506   42158 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 18:58:55.829585   42158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 18:58:55.837874   42158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 18:58:55.845208   42158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 18:58:55.852351   42158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 18:58:55.859384   42158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 18:58:55.867455   42158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
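
The run above rewrites /etc/containerd/config.toml in place with a series of sed substitutions: it pins the sandbox (pause) image, forces SystemdCgroup = false so containerd uses the cgroupfs driver, switches the runtime to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. As a rough Go sketch of the same line-oriented rewrite (illustrative only, not minikube's code; the path and replacement values are taken from the log, the program itself is assumed):

package main

import (
	"log"
	"os"
	"regexp"
)

// Sketch: apply the same kind of line-oriented substitutions the log shows
// being done with sed against /etc/containerd/config.toml.
func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	rules := []struct{ re, repl string }{
		// keep the indentation ($1) and force the cgroupfs driver
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		// pin the pause image used for pod sandboxes
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.4.1"`},
		// point containerd at the CNI config directory
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
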
	I0823 18:58:55.874753   42158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 18:58:55.880806   42158 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0823 18:58:55.880862   42158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0823 18:58:55.891459   42158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 18:58:55.898050   42158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 18:58:56.029325   42158 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 18:58:56.078118   42158 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0823 18:58:56.078212   42158 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0823 18:58:56.085521   42158 retry.go:31] will retry after 1.020189587s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
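
After restarting containerd, the run waits up to 60s for /run/containerd/containerd.sock to exist, and the retry.go line above shows it re-checking after a roughly one-second, slightly randomized delay. A stripped-down sketch of that wait-for-socket loop (assumed structure; a fixed one-second sleep stands in for the randomized delay):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it exists or the deadline
// passes, roughly mirroring the "will retry after ..." loop in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second) // the log suggests the real delay is jittered
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
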
	I0823 18:58:57.106726   42158 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0823 18:58:57.111853   42158 start.go:534] Will wait 60s for crictl version
	I0823 18:58:57.111929   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:58:57.116159   42158 ssh_runner.go:195] Run: sudo /bin/crictl version
	I0823 18:58:57.136735   42158 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.4
	RuntimeApiVersion:  v1alpha2
	I0823 18:58:57.136815   42158 ssh_runner.go:195] Run: containerd --version
	I0823 18:58:57.171644   42158 ssh_runner.go:195] Run: containerd --version
	I0823 18:58:57.199668   42158 out.go:177] * Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	I0823 18:58:57.201341   42158 main.go:141] libmachine: (stopped-upgrade-228249) Calling .GetIP
	I0823 18:58:57.204416   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:57.204890   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:a8:75", ip: ""} in network mk-stopped-upgrade-228249: {Iface:virbr4 ExpiryTime:2023-08-23 19:55:56 +0000 UTC Type:0 Mac:52:54:00:8f:a8:75 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:stopped-upgrade-228249 Clientid:01:52:54:00:8f:a8:75}
	I0823 18:58:57.204921   42158 main.go:141] libmachine: (stopped-upgrade-228249) DBG | domain stopped-upgrade-228249 has defined IP address 192.168.72.172 and MAC address 52:54:00:8f:a8:75 in network mk-stopped-upgrade-228249
	I0823 18:58:57.205111   42158 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0823 18:58:57.209357   42158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
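
The bash one-liner above makes the host.minikube.internal entry in /etc/hosts idempotent: it filters out any existing line for that name, appends the current IP, and copies the result back with sudo. A hypothetical Go version of that filter-and-append step (the IP and hostname come from the log; the helper itself is illustrative):

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given hostname and
// appends a fresh "ip\thostname" entry, like the shell pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
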
	I0823 18:58:57.219046   42158 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0823 18:58:57.219119   42158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0823 18:58:57.239445   42158 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.21.2". assuming images are not preloaded.
	I0823 18:58:57.239532   42158 ssh_runner.go:195] Run: which lz4
	I0823 18:58:57.243411   42158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 18:58:57.247985   42158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0823 18:58:57.248021   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (483579245 bytes)
	I0823 18:58:58.950645   42158 containerd.go:547] Took 1.707277 seconds to copy over tarball
	I0823 18:58:58.950712   42158 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 18:59:02.508725   42158 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.557990561s)
	I0823 18:59:02.508750   42158 containerd.go:554] Took 3.558079 seconds to extract the tarball
	I0823 18:59:02.508761   42158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0823 18:59:02.545452   42158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 18:59:02.675082   42158 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 18:59:02.712751   42158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0823 18:59:03.734082   42158 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.021294943s)
	I0823 18:59:03.734224   42158 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.21.2". assuming images are not preloaded.
	I0823 18:59:03.734238   42158 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.21.2 registry.k8s.io/kube-controller-manager:v1.21.2 registry.k8s.io/kube-scheduler:v1.21.2 registry.k8s.io/kube-proxy:v1.21.2 registry.k8s.io/pause:3.4.1 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0823 18:59:03.734297   42158 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 18:59:03.734337   42158 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.21.2
	I0823 18:59:03.734368   42158 image.go:134] retrieving image: registry.k8s.io/pause:3.4.1
	I0823 18:59:03.734409   42158 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.0
	I0823 18:59:03.734505   42158 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 18:59:03.734351   42158 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0823 18:59:03.734561   42158 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.21.2
	I0823 18:59:03.734557   42158 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.21.2
	I0823 18:59:03.736139   42158 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.21.2
	I0823 18:59:03.736200   42158 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 18:59:03.736266   42158 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.0: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.0
	I0823 18:59:03.736140   42158 image.go:177] daemon lookup for registry.k8s.io/pause:3.4.1: Error response from daemon: No such image: registry.k8s.io/pause:3.4.1
	I0823 18:59:03.736523   42158 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.21.2
	I0823 18:59:03.736567   42158 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 18:59:03.736628   42158 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.21.2
	I0823 18:59:03.736747   42158 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0823 18:59:03.979995   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.4.1"
	I0823 18:59:04.025229   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.21.2"
	I0823 18:59:04.157405   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.0"
	I0823 18:59:04.157893   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.21.2"
	I0823 18:59:04.176357   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.21.2"
	I0823 18:59:04.219777   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.13-0"
	I0823 18:59:04.265877   42158 cache_images.go:116] "registry.k8s.io/pause:3.4.1" needs transfer: "registry.k8s.io/pause:3.4.1" does not exist at hash "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253" in container runtime
	I0823 18:59:04.265938   42158 cri.go:218] Removing image: registry.k8s.io/pause:3.4.1
	I0823 18:59:04.265990   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:04.287641   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.21.2"
	I0823 18:59:04.490065   42158 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.21.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.21.2" does not exist at hash "106ff58d4308243e0042862435f5a0b14dd332d8151f17a739046c7df33c7ae6" in container runtime
	I0823 18:59:04.490112   42158 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.21.2
	I0823 18:59:04.490162   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:04.749645   42158 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.21.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.21.2" does not exist at hash "ae24db9aa2cc0d8572cc5c1c0eda9f40e0a8170cecefe742a5d7f1d4170f4eb1" in container runtime
	I0823 18:59:04.749695   42158 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 18:59:04.749745   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:04.846216   42158 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.0" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.0" does not exist at hash "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899" in container runtime
	I0823 18:59:04.846267   42158 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.0
	I0823 18:59:04.846315   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:04.897268   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0823 18:59:05.074151   42158 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.21.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.21.2" does not exist at hash "f917b8c8f55b7fd9bcd895920e2c16fb3e3770c94eba844262a57a55c6187d86" in container runtime
	I0823 18:59:05.074220   42158 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.21.2
	I0823 18:59:05.074272   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:05.088225   42158 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0823 18:59:05.088268   42158 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0823 18:59:05.088305   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:05.088304   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/pause:3.4.1
	I0823 18:59:05.202243   42158 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.21.2" needs transfer: "registry.k8s.io/kube-proxy:v1.21.2" does not exist at hash "a6ebd1c1ad9810239a2885494ae92e0230224bafcb39ef1433c6cb49a98b0dfe" in container runtime
	I0823 18:59:05.202292   42158 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.21.2
	I0823 18:59:05.202319   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-apiserver:v1.21.2
	I0823 18:59:05.202334   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:05.202366   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.21.2
	I0823 18:59:05.202500   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.0
	I0823 18:59:05.332080   42158 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0823 18:59:05.332131   42158 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 18:59:05.332176   42158 ssh_runner.go:195] Run: which crictl
	I0823 18:59:05.332192   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-scheduler:v1.21.2
	I0823 18:59:05.332279   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0823 18:59:05.332444   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.4.1
	I0823 18:59:05.336874   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.0
	I0823 18:59:05.336932   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-proxy:v1.21.2
	I0823 18:59:05.346741   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.21.2
	I0823 18:59:05.346780   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.21.2
	I0823 18:59:05.358049   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.21.2
	I0823 18:59:05.358169   42158 ssh_runner.go:195] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 18:59:05.371011   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0823 18:59:05.374888   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.21.2
	I0823 18:59:05.417073   42158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0823 18:59:05.417196   42158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0823 18:59:05.422924   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0823 18:59:05.475939   42158 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0823 18:59:05.476011   42158 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0823 18:59:06.112360   42158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0823 18:59:06.112417   42158 cache_images.go:92] LoadImages completed in 2.378169288s
	W0823 18:59:06.112487   42158 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.4.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.4.1: no such file or directory
	I0823 18:59:06.112556   42158 ssh_runner.go:195] Run: sudo crictl info
	I0823 18:59:06.130534   42158 cni.go:84] Creating CNI manager for ""
	I0823 18:59:06.130556   42158 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0823 18:59:06.130573   42158 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 18:59:06.130594   42158 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.172 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-228249 NodeName:stopped-upgrade-228249 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0823 18:59:06.130714   42158 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "stopped-upgrade-228249"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0823 18:59:06.130787   42158 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=stopped-upgrade-228249 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-228249 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
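
The kubeadm config and kubelet drop-in printed above carry the settings the rest of this run depends on: the cgroupfs driver, the static pod path, and the relaxed eviction thresholds. Purely as an illustrative sanity check, and not something minikube itself does, the KubeletConfiguration fragment could be decoded with gopkg.in/yaml.v3; the struct below covers only a handful of the fields shown:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletCfg mirrors a few fields of the KubeletConfiguration block above.
type kubeletCfg struct {
	CgroupDriver  string            `yaml:"cgroupDriver"`
	FailSwapOn    bool              `yaml:"failSwapOn"`
	StaticPodPath string            `yaml:"staticPodPath"`
	EvictionHard  map[string]string `yaml:"evictionHard"`
}

func main() {
	doc := []byte(`
cgroupDriver: cgroupfs
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
evictionHard:
  nodefs.available: "0%"
`)
	var cfg kubeletCfg
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("driver=%s failSwapOn=%v\n", cfg.CgroupDriver, cfg.FailSwapOn)
}
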
	I0823 18:59:06.130838   42158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0823 18:59:06.137395   42158 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 18:59:06.137475   42158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 18:59:06.143823   42158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (444 bytes)
	I0823 18:59:06.154525   42158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 18:59:06.164957   42158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0823 18:59:06.176181   42158 ssh_runner.go:195] Run: grep 192.168.72.172	control-plane.minikube.internal$ /etc/hosts
	I0823 18:59:06.179671   42158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 18:59:06.188621   42158 certs.go:56] Setting up /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249 for IP: 192.168.72.172
	I0823 18:59:06.188663   42158 certs.go:190] acquiring lock for shared ca certs: {Name:mk306615e8137283da7a256d08e7c92ef0f9dd28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 18:59:06.188816   42158 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.key
	I0823 18:59:06.188889   42158 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.key
	I0823 18:59:06.189014   42158 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/client.key
	I0823 18:59:06.189105   42158 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/apiserver.key.c1ead329
	I0823 18:59:06.189169   42158 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/proxy-client.key
	I0823 18:59:06.189310   42158 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372.pem (1338 bytes)
	W0823 18:59:06.189350   42158 certs.go:433] ignoring /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0823 18:59:06.189375   42158 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 18:59:06.189422   42158 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem (1078 bytes)
	I0823 18:59:06.189455   42158 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem (1123 bytes)
	I0823 18:59:06.189494   42158 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem (1675 bytes)
	I0823 18:59:06.189568   42158 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0823 18:59:06.190479   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 18:59:06.205455   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 18:59:06.220487   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 18:59:06.237008   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0823 18:59:06.253785   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 18:59:06.270583   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0823 18:59:06.285029   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 18:59:06.299974   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I0823 18:59:06.315688   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0823 18:59:06.329823   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 18:59:06.344129   42158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0823 18:59:06.358428   42158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 18:59:06.370349   42158 ssh_runner.go:195] Run: openssl version
	I0823 18:59:06.375816   42158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0823 18:59:06.382910   42158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0823 18:59:06.386981   42158 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 23 18:20 /usr/share/ca-certificates/183722.pem
	I0823 18:59:06.387033   42158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0823 18:59:06.393289   42158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0823 18:59:06.400604   42158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 18:59:06.407802   42158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 18:59:06.412049   42158 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0823 18:59:06.412101   42158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 18:59:06.417792   42158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0823 18:59:06.425099   42158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0823 18:59:06.434517   42158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0823 18:59:06.439657   42158 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 23 18:20 /usr/share/ca-certificates/18372.pem
	I0823 18:59:06.439712   42158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0823 18:59:06.445623   42158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0823 18:59:06.454663   42158 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 18:59:06.459228   42158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0823 18:59:06.465253   42158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0823 18:59:06.470883   42158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0823 18:59:06.476771   42158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0823 18:59:06.483522   42158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0823 18:59:06.489592   42158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
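
Each openssl x509 -noout ... -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now (a non-zero exit means it expires within that window, which would trigger regeneration). The same test can be expressed with crypto/x509; the path below is one of the certs from the log, the helper itself is a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the condition that `openssl x509 -checkend 86400` flags.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
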
	I0823 18:59:06.495511   42158 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-228249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrad
e-228249 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:59:06.495607   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0823 18:59:06.495654   42158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0823 18:59:06.511667   42158 cri.go:89] found id: ""
	I0823 18:59:06.511726   42158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 18:59:06.518900   42158 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0823 18:59:06.518923   42158 kubeadm.go:636] restartCluster start
	I0823 18:59:06.518964   42158 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0823 18:59:06.524795   42158 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0823 18:59:06.525438   42158 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-228249" does not appear in /home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 18:59:06.525841   42158 kubeconfig.go:146] "stopped-upgrade-228249" context is missing from /home/jenkins/minikube-integration/17086-11104/kubeconfig - will repair!
	I0823 18:59:06.526506   42158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17086-11104/kubeconfig: {Name:mkb6ab3495f5663c5ba2bb1ce0b9748373e0a0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 18:59:06.527394   42158 kapi.go:59] client config for stopped-upgrade-228249: &rest.Config{Host:"https://192.168.72.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/client.crt", KeyFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/profiles/stopped-upgrade-228249/client.key", CAFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0823 18:59:06.528164   42158 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0823 18:59:06.534753   42158 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -52,6 +52,8 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	@@ -68,3 +70,7 @@
	 metricsBindAddress: 0.0.0.0:10249
	 conntrack:
	   maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
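
The diff above is what drives the "needs reconfigure" decision: the kubeadm.yaml already on the node lacks the hairpinMode/runtimeRequestTimeout lines and the conntrack timeout overrides present in kubeadm.yaml.new, so the config is copied over and the control-plane phases are re-run. A bare-bones sketch of that comparison (minikube actually shells out to diff -u over SSH; the byte comparison here only illustrates the decision, not the mechanism):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

// needsReconfigure reports whether the config already on the node differs
// from the freshly generated one, analogous to the `diff -u` check above.
func needsReconfigure(currentPath, newPath string) (bool, error) {
	cur, err := os.ReadFile(currentPath)
	if err != nil {
		return false, err
	}
	next, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(cur, next), nil
}

func main() {
	differ, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("needs reconfigure:", differ)
}
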
	I0823 18:59:06.534771   42158 kubeadm.go:1128] stopping kube-system containers ...
	I0823 18:59:06.534784   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0823 18:59:06.534827   42158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0823 18:59:06.552577   42158 cri.go:89] found id: ""
	I0823 18:59:06.552649   42158 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0823 18:59:06.564546   42158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 18:59:06.571815   42158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 18:59:06.571882   42158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 18:59:06.577853   42158 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0823 18:59:06.577877   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 18:59:06.719890   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 18:59:07.671944   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0823 18:59:07.885698   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 18:59:08.024722   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0823 18:59:08.146237   42158 api_server.go:52] waiting for apiserver process to appear ...
	I0823 18:59:08.146319   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:08.156005   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:08.677873   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:09.177799   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:09.677671   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:10.177327   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:10.678254   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:11.178053   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:11.677326   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:12.177269   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:12.678118   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:13.177310   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:13.677491   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:14.178112   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:14.677821   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:15.177394   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:15.677234   42158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:59:15.702614   42158 api_server.go:72] duration metric: took 7.55637447s to wait for apiserver process to appear ...
	I0823 18:59:15.702643   42158 api_server.go:88] waiting for apiserver healthz status ...
	I0823 18:59:15.702663   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:15.703281   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 18:59:15.703335   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:15.703985   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 18:59:16.204700   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:21.205168   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:21.205209   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:26.206214   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:26.206270   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:31.207151   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:31.207186   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:36.208350   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:36.208423   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:36.490540   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": read tcp 192.168.72.1:46694->192.168.72.172:8443: read: connection reset by peer
	I0823 18:59:36.705116   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:36.705815   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 18:59:37.204254   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:42.204966   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:42.205007   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:47.206133   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:47.206180   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:52.206741   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:52.206786   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:57.207460   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 18:59:57.207527   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:57.819675   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": read tcp 192.168.72.1:39380->192.168.72.172:8443: read: connection reset by peer
	I0823 18:59:57.819720   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:57.820182   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 18:59:58.204708   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:58.205309   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 18:59:58.704924   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:58.705552   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 18:59:59.204075   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:59.204585   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 18:59:59.704679   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 18:59:59.705231   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:00.204864   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:00.205471   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:00.704158   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:00.704729   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:01.204263   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:01.204770   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:01.704343   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:01.704969   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:02.204507   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:02.205046   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:02.704664   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:02.705201   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:03.204839   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:03.205406   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:03.705033   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:03.705674   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:04.204192   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:04.204763   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:04.704790   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:04.705356   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:05.204980   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:05.205568   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:05.705044   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:05.705600   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:06.204325   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:06.204922   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:06.704776   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:06.705323   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:07.204974   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:07.205537   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:07.704073   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:07.704675   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:08.204190   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:08.204762   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:08.704286   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:08.704842   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:09.204383   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:09.205009   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:09.705137   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:09.705776   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:10.204334   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:10.204935   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:10.705000   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:10.705654   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:11.204232   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:11.204864   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:11.704149   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:11.704754   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:12.204369   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:12.204953   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:12.704050   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:12.704690   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:13.204239   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:13.204814   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:13.704337   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:13.704991   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:14.204380   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:14.204994   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:14.704872   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:14.705490   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:15.205096   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:15.205688   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:15.704339   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:15.704436   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:15.728431   42158 cri.go:89] found id: "03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6"
	I0823 19:00:15.728457   42158 cri.go:89] found id: ""
	I0823 19:00:15.728466   42158 logs.go:284] 1 containers: [03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6]
	I0823 19:00:15.728529   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:15.734512   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:15.734583   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:15.751066   42158 cri.go:89] found id: "1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921"
	I0823 19:00:15.751085   42158 cri.go:89] found id: ""
	I0823 19:00:15.751092   42158 logs.go:284] 1 containers: [1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921]
	I0823 19:00:15.751144   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:15.754813   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:15.754870   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:15.771240   42158 cri.go:89] found id: ""
	I0823 19:00:15.771265   42158 logs.go:284] 0 containers: []
	W0823 19:00:15.771273   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:15.771279   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:15.771354   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:15.787843   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:15.787868   42158 cri.go:89] found id: ""
	I0823 19:00:15.787881   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:15.787941   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:15.791893   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:15.791953   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:15.811432   42158 cri.go:89] found id: ""
	I0823 19:00:15.811458   42158 logs.go:284] 0 containers: []
	W0823 19:00:15.811466   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:15.811472   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:15.811529   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:15.838133   42158 cri.go:89] found id: "3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:15.838152   42158 cri.go:89] found id: ""
	I0823 19:00:15.838158   42158 logs.go:284] 1 containers: [3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f]
	I0823 19:00:15.838205   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:15.841870   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:15.841940   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:15.859980   42158 cri.go:89] found id: ""
	I0823 19:00:15.860008   42158 logs.go:284] 0 containers: []
	W0823 19:00:15.860020   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:15.860042   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:15.860104   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:15.876220   42158 cri.go:89] found id: ""
	I0823 19:00:15.876244   42158 logs.go:284] 0 containers: []
	W0823 19:00:15.876251   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:15.876267   42158 logs.go:123] Gathering logs for kube-controller-manager [3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f] ...
	I0823 19:00:15.876279   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:15.907233   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:15.907267   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:15.969187   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:15.969231   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:16.033438   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:16.033489   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:00:16.191231   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:16.191259   42158 logs.go:123] Gathering logs for kube-apiserver [03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6] ...
	I0823 19:00:16.191274   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6"
	I0823 19:00:16.213119   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:16.213145   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:16.245728   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:16.245770   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:16.258220   42158 logs.go:123] Gathering logs for etcd [1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921] ...
	I0823 19:00:16.258251   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921"
	I0823 19:00:16.275252   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:16.275283   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:18.809953   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:18.810631   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:18.810699   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:18.810750   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:18.826358   42158 cri.go:89] found id: "03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6"
	I0823 19:00:18.826380   42158 cri.go:89] found id: ""
	I0823 19:00:18.826386   42158 logs.go:284] 1 containers: [03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6]
	I0823 19:00:18.826436   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:18.830956   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:18.831045   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:18.851401   42158 cri.go:89] found id: "1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921"
	I0823 19:00:18.851431   42158 cri.go:89] found id: ""
	I0823 19:00:18.851442   42158 logs.go:284] 1 containers: [1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921]
	I0823 19:00:18.851506   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:18.856214   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:18.856287   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:18.875067   42158 cri.go:89] found id: ""
	I0823 19:00:18.875095   42158 logs.go:284] 0 containers: []
	W0823 19:00:18.875105   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:18.875112   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:18.875187   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:18.901985   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:18.902010   42158 cri.go:89] found id: ""
	I0823 19:00:18.902020   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:18.902086   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:18.906847   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:18.906919   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:18.923240   42158 cri.go:89] found id: ""
	I0823 19:00:18.923266   42158 logs.go:284] 0 containers: []
	W0823 19:00:18.923276   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:18.923284   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:18.923347   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:18.941413   42158 cri.go:89] found id: "3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:18.941439   42158 cri.go:89] found id: ""
	I0823 19:00:18.941449   42158 logs.go:284] 1 containers: [3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f]
	I0823 19:00:18.941509   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:18.947540   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:18.947615   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:18.965280   42158 cri.go:89] found id: ""
	I0823 19:00:18.965306   42158 logs.go:284] 0 containers: []
	W0823 19:00:18.965316   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:18.965323   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:18.965397   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:18.983573   42158 cri.go:89] found id: ""
	I0823 19:00:18.983594   42158 logs.go:284] 0 containers: []
	W0823 19:00:18.983602   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:18.983619   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:18.983634   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:19.065352   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:19.065389   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:19.107632   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:19.107668   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:19.170521   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:19.170567   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:19.197138   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:19.197176   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:19.208463   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:19.208498   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:00:19.301706   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:19.301739   42158 logs.go:123] Gathering logs for kube-apiserver [03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6] ...
	I0823 19:00:19.301756   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6"
	I0823 19:00:19.331560   42158 logs.go:123] Gathering logs for etcd [1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921] ...
	I0823 19:00:19.331593   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921"
	I0823 19:00:19.347735   42158 logs.go:123] Gathering logs for kube-controller-manager [3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f] ...
	I0823 19:00:19.347773   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:21.883101   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:26.883789   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:00:26.883859   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:26.883928   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:26.901500   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:26.901519   42158 cri.go:89] found id: "03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6"
	I0823 19:00:26.901523   42158 cri.go:89] found id: ""
	I0823 19:00:26.901529   42158 logs.go:284] 2 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9 03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6]
	I0823 19:00:26.901582   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:26.905346   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:26.909981   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:26.910047   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:26.925560   42158 cri.go:89] found id: "1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921"
	I0823 19:00:26.925582   42158 cri.go:89] found id: ""
	I0823 19:00:26.925590   42158 logs.go:284] 1 containers: [1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921]
	I0823 19:00:26.925647   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:26.929337   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:26.929399   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:26.944207   42158 cri.go:89] found id: ""
	I0823 19:00:26.944230   42158 logs.go:284] 0 containers: []
	W0823 19:00:26.944235   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:26.944240   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:26.944288   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:26.958860   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:26.958876   42158 cri.go:89] found id: ""
	I0823 19:00:26.958882   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:26.958926   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:26.962586   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:26.962631   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:26.978205   42158 cri.go:89] found id: ""
	I0823 19:00:26.978225   42158 logs.go:284] 0 containers: []
	W0823 19:00:26.978230   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:26.978263   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:26.978316   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:26.998174   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:26.998192   42158 cri.go:89] found id: "3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:26.998198   42158 cri.go:89] found id: ""
	I0823 19:00:26.998206   42158 logs.go:284] 2 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b 3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f]
	I0823 19:00:26.998257   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:27.002123   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:27.006025   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:27.006082   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:27.021488   42158 cri.go:89] found id: ""
	I0823 19:00:27.021525   42158 logs.go:284] 0 containers: []
	W0823 19:00:27.021531   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:27.021537   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:27.021609   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:27.037131   42158 cri.go:89] found id: ""
	I0823 19:00:27.037154   42158 logs.go:284] 0 containers: []
	W0823 19:00:27.037160   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:27.037168   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:27.037182   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:27.077264   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:00:27.077297   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:27.094630   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:27.094657   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0823 19:00:42.218766   42158 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (15.124083349s)
	W0823 19:00:42.218808   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:42.218829   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:00:42.218893   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:42.238901   42158 logs.go:123] Gathering logs for kube-apiserver [03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6] ...
	I0823 19:00:42.238928   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6"
	W0823 19:00:42.256030   42158 logs.go:130] failed kube-apiserver [03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6" /bin/bash -c "sudo /bin/crictl logs --tail 400 03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6": Process exited with status 1
	stdout:
	
	stderr:
	E0823 19:00:42.246515    3501 remote_runtime.go:329] ContainerStatus "03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6": not found
	time="2023-08-23T19:00:42Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6\": not found"
	 output: 
	** stderr ** 
	E0823 19:00:42.246515    3501 remote_runtime.go:329] ContainerStatus "03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6": not found
	time="2023-08-23T19:00:42Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"03b02c6f71ceb51326a088d6a8a3cfa9d02c8353e29b35be3ec50e16640a49f6\": not found"
	
	** /stderr **
	I0823 19:00:42.256059   42158 logs.go:123] Gathering logs for kube-controller-manager [3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f] ...
	I0823 19:00:42.256075   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:42.286154   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:42.286186   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:42.332333   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:42.332370   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:42.364980   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:42.365009   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:42.437768   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:42.437803   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:42.450853   42158 logs.go:123] Gathering logs for etcd [1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921] ...
	I0823 19:00:42.450876   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921"
	W0823 19:00:42.471750   42158 logs.go:130] failed etcd [1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921" /bin/bash -c "sudo /bin/crictl logs --tail 400 1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921": Process exited with status 1
	stdout:
	
	stderr:
	E0823 19:00:42.462258    3524 remote_runtime.go:329] ContainerStatus "1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921": not found
	time="2023-08-23T19:00:42Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921\": not found"
	 output: 
	** stderr ** 
	E0823 19:00:42.462258    3524 remote_runtime.go:329] ContainerStatus "1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921": not found
	time="2023-08-23T19:00:42Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"1851ebe52a59804d0c03a1dcb9c8cdd205298c0146ceb883643b0020d70cf921\": not found"
	
	** /stderr **
	I0823 19:00:44.972896   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:44.973656   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:44.973704   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:44.973769   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:44.992327   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:44.992356   42158 cri.go:89] found id: ""
	I0823 19:00:44.992366   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:00:44.992425   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:44.997187   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:44.997251   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:45.015889   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:45.015919   42158 cri.go:89] found id: ""
	I0823 19:00:45.015928   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:00:45.015999   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:45.021154   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:45.021236   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:45.039794   42158 cri.go:89] found id: ""
	I0823 19:00:45.039819   42158 logs.go:284] 0 containers: []
	W0823 19:00:45.039829   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:45.039836   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:45.039912   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:45.056662   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:45.056681   42158 cri.go:89] found id: ""
	I0823 19:00:45.056688   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:45.056737   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:45.060416   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:45.060482   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:45.076475   42158 cri.go:89] found id: ""
	I0823 19:00:45.076504   42158 logs.go:284] 0 containers: []
	W0823 19:00:45.076515   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:45.076539   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:45.076601   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:45.098421   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:45.098444   42158 cri.go:89] found id: "3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:45.098450   42158 cri.go:89] found id: ""
	I0823 19:00:45.098458   42158 logs.go:284] 2 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b 3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f]
	I0823 19:00:45.098521   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:45.102754   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:45.106678   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:45.106752   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:45.125721   42158 cri.go:89] found id: ""
	I0823 19:00:45.125756   42158 logs.go:284] 0 containers: []
	W0823 19:00:45.125774   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:45.125782   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:45.125854   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:45.145462   42158 cri.go:89] found id: ""
	I0823 19:00:45.145488   42158 logs.go:284] 0 containers: []
	W0823 19:00:45.145504   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:45.145521   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:45.145552   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:45.175003   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:45.175044   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:45.219879   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:00:45.219929   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:45.245412   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:45.245506   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:45.298034   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:00:45.298069   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:45.322143   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:00:45.322175   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:45.342254   42158 logs.go:123] Gathering logs for kube-controller-manager [3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f] ...
	I0823 19:00:45.342305   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3a2bc538e3833c880cff7ada3716fd5bc27cce166ab016b2eadfc44e473c690f"
	I0823 19:00:45.375962   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:45.375998   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:45.438900   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:45.438941   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:45.450288   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:45.450317   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:00:45.539580   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:48.040678   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:48.041298   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:48.041369   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:48.041435   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:48.060918   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:48.060940   42158 cri.go:89] found id: ""
	I0823 19:00:48.060947   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:00:48.061002   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:48.065166   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:48.065216   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:48.084200   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:48.084220   42158 cri.go:89] found id: ""
	I0823 19:00:48.084230   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:00:48.084281   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:48.087986   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:48.088036   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:48.106451   42158 cri.go:89] found id: ""
	I0823 19:00:48.106477   42158 logs.go:284] 0 containers: []
	W0823 19:00:48.106487   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:48.106514   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:48.106564   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:48.123359   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:48.123380   42158 cri.go:89] found id: ""
	I0823 19:00:48.123387   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:48.123435   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:48.127181   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:48.127235   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:48.142770   42158 cri.go:89] found id: ""
	I0823 19:00:48.142799   42158 logs.go:284] 0 containers: []
	W0823 19:00:48.142808   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:48.142817   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:48.142902   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:48.159235   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:48.159254   42158 cri.go:89] found id: ""
	I0823 19:00:48.159262   42158 logs.go:284] 1 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:00:48.159317   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:48.162951   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:48.163008   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:48.179511   42158 cri.go:89] found id: ""
	I0823 19:00:48.179536   42158 logs.go:284] 0 containers: []
	W0823 19:00:48.179543   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:48.179548   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:48.179591   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:48.194956   42158 cri.go:89] found id: ""
	I0823 19:00:48.194977   42158 logs.go:284] 0 containers: []
	W0823 19:00:48.194984   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:48.194996   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:48.195010   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:48.258559   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:00:48.258597   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:48.289083   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:48.289115   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:48.336499   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:00:48.336539   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:48.354151   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:48.354180   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:48.393735   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:48.393769   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:48.414607   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:48.414635   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:48.423609   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:48.423631   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:00:48.496861   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:48.496889   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:00:48.496899   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:51.017821   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:51.018372   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:51.018420   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:51.018465   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:51.036639   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:51.036658   42158 cri.go:89] found id: ""
	I0823 19:00:51.036665   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:00:51.036708   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:51.040720   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:51.040783   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:51.056992   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:51.057019   42158 cri.go:89] found id: ""
	I0823 19:00:51.057029   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:00:51.057083   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:51.060818   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:51.060881   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:51.074839   42158 cri.go:89] found id: ""
	I0823 19:00:51.074859   42158 logs.go:284] 0 containers: []
	W0823 19:00:51.074872   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:51.074879   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:51.074944   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:51.090874   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:51.090901   42158 cri.go:89] found id: ""
	I0823 19:00:51.090910   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:51.090975   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:51.094660   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:51.094708   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:51.110048   42158 cri.go:89] found id: ""
	I0823 19:00:51.110081   42158 logs.go:284] 0 containers: []
	W0823 19:00:51.110090   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:51.110098   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:51.110157   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:51.126750   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:51.126779   42158 cri.go:89] found id: ""
	I0823 19:00:51.126787   42158 logs.go:284] 1 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:00:51.126848   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:51.131108   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:51.131169   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:51.149056   42158 cri.go:89] found id: ""
	I0823 19:00:51.149085   42158 logs.go:284] 0 containers: []
	W0823 19:00:51.149095   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:51.149102   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:51.149164   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:51.165290   42158 cri.go:89] found id: ""
	I0823 19:00:51.165320   42158 logs.go:284] 0 containers: []
	W0823 19:00:51.165330   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:51.165350   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:51.165366   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:51.174806   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:51.174831   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:00:51.244721   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:51.244748   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:00:51.244762   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:51.260836   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:00:51.260867   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:51.294813   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:51.294842   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:51.320285   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:51.320323   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:51.379661   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:00:51.379702   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:51.402640   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:51.402664   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:51.441638   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:51.441668   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:53.990694   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:53.991293   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:53.991337   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:53.991385   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:54.006328   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:54.006354   42158 cri.go:89] found id: ""
	I0823 19:00:54.006362   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:00:54.006411   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:54.010240   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:54.010313   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:54.025468   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:54.025493   42158 cri.go:89] found id: ""
	I0823 19:00:54.025502   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:00:54.025574   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:54.029189   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:54.029254   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:54.043616   42158 cri.go:89] found id: ""
	I0823 19:00:54.043642   42158 logs.go:284] 0 containers: []
	W0823 19:00:54.043649   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:54.043654   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:54.043705   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:54.057854   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:54.057873   42158 cri.go:89] found id: ""
	I0823 19:00:54.057880   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:54.057922   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:54.061766   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:54.061830   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:54.080830   42158 cri.go:89] found id: ""
	I0823 19:00:54.080851   42158 logs.go:284] 0 containers: []
	W0823 19:00:54.080857   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:54.080863   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:54.080912   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:54.095746   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:54.095764   42158 cri.go:89] found id: ""
	I0823 19:00:54.095770   42158 logs.go:284] 1 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:00:54.095826   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:54.099540   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:54.099599   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:54.114759   42158 cri.go:89] found id: ""
	I0823 19:00:54.114785   42158 logs.go:284] 0 containers: []
	W0823 19:00:54.114792   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:54.114797   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:54.114851   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:54.128793   42158 cri.go:89] found id: ""
	I0823 19:00:54.128819   42158 logs.go:284] 0 containers: []
	W0823 19:00:54.128828   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:54.128842   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:54.128860   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:00:54.200451   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:54.200478   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:00:54.200493   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:54.224576   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:00:54.224604   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:54.241910   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:00:54.241941   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:54.273809   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:54.273845   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:54.321316   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:54.321355   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:54.378841   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:54.378877   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:54.389129   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:54.389159   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:54.424960   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:54.424987   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:56.949943   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:56.950526   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:56.950572   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:56.950616   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:56.970253   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:56.970270   42158 cri.go:89] found id: ""
	I0823 19:00:56.970277   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:00:56.970324   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:56.975359   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:00:56.975440   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:00:56.993434   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:56.993459   42158 cri.go:89] found id: ""
	I0823 19:00:56.993468   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:00:56.993529   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:56.998965   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:00:56.999036   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:00:57.019931   42158 cri.go:89] found id: ""
	I0823 19:00:57.019954   42158 logs.go:284] 0 containers: []
	W0823 19:00:57.019963   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:00:57.019971   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:00:57.020021   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:00:57.041373   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:57.041394   42158 cri.go:89] found id: ""
	I0823 19:00:57.041403   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:00:57.041446   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:57.046479   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:00:57.046549   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:00:57.069837   42158 cri.go:89] found id: ""
	I0823 19:00:57.069859   42158 logs.go:284] 0 containers: []
	W0823 19:00:57.069874   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:00:57.069883   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:00:57.069935   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:00:57.088545   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:57.088563   42158 cri.go:89] found id: ""
	I0823 19:00:57.088570   42158 logs.go:284] 1 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:00:57.088613   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:00:57.093211   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:00:57.093277   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:00:57.114320   42158 cri.go:89] found id: ""
	I0823 19:00:57.114343   42158 logs.go:284] 0 containers: []
	W0823 19:00:57.114350   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:00:57.114356   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:00:57.114401   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:00:57.134675   42158 cri.go:89] found id: ""
	I0823 19:00:57.134697   42158 logs.go:284] 0 containers: []
	W0823 19:00:57.134705   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:00:57.134719   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:00:57.134730   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:57.158610   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:00:57.158633   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:00:57.175737   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:00:57.175765   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:00:57.215637   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:00:57.215673   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:00:57.250032   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:00:57.250069   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:00:57.298768   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:00:57.298801   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:00:57.366969   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:00:57.367008   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:00:57.378310   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:00:57.378342   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:00:57.450814   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:00:57.450852   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:00:57.450875   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:00:59.976708   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:00:59.977435   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:00:59.977483   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:00:59.977531   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:00:59.995737   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:00:59.995760   42158 cri.go:89] found id: ""
	I0823 19:00:59.995768   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:00:59.995826   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:00.001304   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:00.001379   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:00.016423   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:00.016445   42158 cri.go:89] found id: ""
	I0823 19:01:00.016452   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:00.016496   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:00.020921   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:00.021001   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:00.040147   42158 cri.go:89] found id: ""
	I0823 19:01:00.040173   42158 logs.go:284] 0 containers: []
	W0823 19:01:00.040183   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:00.040191   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:00.040253   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:00.056793   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:00.056818   42158 cri.go:89] found id: ""
	I0823 19:01:00.056827   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:00.056883   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:00.062430   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:00.062503   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:00.083439   42158 cri.go:89] found id: ""
	I0823 19:01:00.083470   42158 logs.go:284] 0 containers: []
	W0823 19:01:00.083480   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:00.083489   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:00.083553   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:00.101159   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:00.101182   42158 cri.go:89] found id: ""
	I0823 19:01:00.101189   42158 logs.go:284] 1 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:01:00.101237   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:00.106487   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:00.106545   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:00.123248   42158 cri.go:89] found id: ""
	I0823 19:01:00.123279   42158 logs.go:284] 0 containers: []
	W0823 19:01:00.123290   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:00.123297   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:00.123358   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:00.146067   42158 cri.go:89] found id: ""
	I0823 19:01:00.146092   42158 logs.go:284] 0 containers: []
	W0823 19:01:00.146102   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:00.146120   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:00.146135   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:00.167951   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:00.167986   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:00.233431   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:00.233467   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:00.272022   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:01:00.272052   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:00.303703   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:00.303734   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:00.321006   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:00.321038   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:00.377095   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:00.377130   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:00.387814   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:00.387843   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:00.464132   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:00.464163   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:01:00.464178   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:02.985251   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:02.985865   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:02.985932   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:02.985990   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:03.008270   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:03.008296   42158 cri.go:89] found id: ""
	I0823 19:01:03.008304   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:01:03.008358   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:03.013640   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:03.013709   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:03.036073   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:03.036097   42158 cri.go:89] found id: ""
	I0823 19:01:03.036106   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:03.036154   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:03.042066   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:03.042135   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:03.057990   42158 cri.go:89] found id: ""
	I0823 19:01:03.058017   42158 logs.go:284] 0 containers: []
	W0823 19:01:03.058028   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:03.058037   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:03.058095   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:03.075721   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:03.075746   42158 cri.go:89] found id: ""
	I0823 19:01:03.075753   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:03.075799   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:03.079759   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:03.079818   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:03.095946   42158 cri.go:89] found id: ""
	I0823 19:01:03.095973   42158 logs.go:284] 0 containers: []
	W0823 19:01:03.095983   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:03.095991   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:03.096054   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:03.114021   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:03.114053   42158 cri.go:89] found id: ""
	I0823 19:01:03.114063   42158 logs.go:284] 1 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:01:03.114112   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:03.119449   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:03.119516   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:03.139886   42158 cri.go:89] found id: ""
	I0823 19:01:03.139915   42158 logs.go:284] 0 containers: []
	W0823 19:01:03.139933   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:03.139942   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:03.140004   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:03.155620   42158 cri.go:89] found id: ""
	I0823 19:01:03.155656   42158 logs.go:284] 0 containers: []
	W0823 19:01:03.155665   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:03.155682   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:03.155696   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:03.228009   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:03.228055   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:03.299502   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:03.299528   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:03.299544   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:03.315619   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:03.315654   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:03.360074   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:03.360106   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:03.379510   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:03.379536   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:03.388581   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:01:03.388608   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:03.414325   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:01:03.414348   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:03.453586   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:03.453618   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:06.001182   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:06.001922   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:06.001986   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:06.002053   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:06.024781   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:06.024807   42158 cri.go:89] found id: ""
	I0823 19:01:06.024816   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:01:06.024875   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:06.029953   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:06.030035   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:06.049538   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:06.049583   42158 cri.go:89] found id: ""
	I0823 19:01:06.049591   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:06.049647   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:06.053795   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:06.053867   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:06.072312   42158 cri.go:89] found id: ""
	I0823 19:01:06.072342   42158 logs.go:284] 0 containers: []
	W0823 19:01:06.072353   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:06.072361   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:06.072422   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:06.094538   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:06.094563   42158 cri.go:89] found id: ""
	I0823 19:01:06.094572   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:06.094629   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:06.099238   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:06.099305   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:06.123225   42158 cri.go:89] found id: ""
	I0823 19:01:06.123259   42158 logs.go:284] 0 containers: []
	W0823 19:01:06.123269   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:06.123277   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:06.123349   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:06.146012   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:06.146042   42158 cri.go:89] found id: ""
	I0823 19:01:06.146052   42158 logs.go:284] 1 containers: [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:01:06.146115   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:06.151185   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:06.151247   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:06.170717   42158 cri.go:89] found id: ""
	I0823 19:01:06.170743   42158 logs.go:284] 0 containers: []
	W0823 19:01:06.170753   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:06.170759   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:06.170828   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:06.189728   42158 cri.go:89] found id: ""
	I0823 19:01:06.189754   42158 logs.go:284] 0 containers: []
	W0823 19:01:06.189763   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:06.189778   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:06.189792   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:06.201512   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:06.201560   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:06.295103   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:06.295132   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:06.295153   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:06.346319   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:06.346365   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:06.412705   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:06.412740   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:06.443677   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:06.443701   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:06.529858   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:06.529898   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:06.552745   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:01:06.552782   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:06.591324   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:01:06.591374   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:09.112320   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:09.113011   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:09.113071   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:09.113138   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:09.133275   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:09.133304   42158 cri.go:89] found id: ""
	I0823 19:01:09.133313   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:01:09.133371   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:09.138186   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:09.138251   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:09.159095   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:09.159122   42158 cri.go:89] found id: ""
	I0823 19:01:09.159132   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:09.159189   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:09.164587   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:09.164657   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:09.184465   42158 cri.go:89] found id: ""
	I0823 19:01:09.184498   42158 logs.go:284] 0 containers: []
	W0823 19:01:09.184508   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:09.184517   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:09.184594   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:09.205738   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:09.205766   42158 cri.go:89] found id: ""
	I0823 19:01:09.205775   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:09.205841   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:09.210951   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:09.211025   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:09.231323   42158 cri.go:89] found id: ""
	I0823 19:01:09.231348   42158 logs.go:284] 0 containers: []
	W0823 19:01:09.231368   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:09.231376   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:09.231441   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:09.251905   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:09.251934   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:09.251941   42158 cri.go:89] found id: ""
	I0823 19:01:09.251949   42158 logs.go:284] 2 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:01:09.252005   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:09.257180   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:09.261908   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:09.261976   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:09.280486   42158 cri.go:89] found id: ""
	I0823 19:01:09.280513   42158 logs.go:284] 0 containers: []
	W0823 19:01:09.280523   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:09.280530   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:09.280590   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:09.299455   42158 cri.go:89] found id: ""
	I0823 19:01:09.299486   42158 logs.go:284] 0 containers: []
	W0823 19:01:09.299496   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:09.299511   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:09.299558   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:09.317947   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:09.317977   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:09.368573   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:09.368607   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:09.389654   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:09.389682   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:09.460132   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:09.460173   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:09.546207   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:09.546235   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:01:09.546251   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:09.570615   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:01:09.570650   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:09.609745   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:09.609778   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:09.673900   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:09.673951   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:09.702413   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:09.702454   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:12.215421   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:12.216005   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:12.216062   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:12.216135   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:12.234131   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:12.234159   42158 cri.go:89] found id: ""
	I0823 19:01:12.234167   42158 logs.go:284] 1 containers: [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:01:12.234216   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:12.238476   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:12.238544   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:12.255742   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:12.255767   42158 cri.go:89] found id: ""
	I0823 19:01:12.255775   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:12.255833   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:12.259675   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:12.259733   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:12.275165   42158 cri.go:89] found id: ""
	I0823 19:01:12.275187   42158 logs.go:284] 0 containers: []
	W0823 19:01:12.275194   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:12.275200   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:12.275263   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:12.290453   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:12.290476   42158 cri.go:89] found id: ""
	I0823 19:01:12.290484   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:12.290542   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:12.294578   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:12.294633   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:12.315285   42158 cri.go:89] found id: ""
	I0823 19:01:12.315312   42158 logs.go:284] 0 containers: []
	W0823 19:01:12.315322   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:12.315329   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:12.315393   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:12.333171   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:12.333196   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:12.333203   42158 cri.go:89] found id: ""
	I0823 19:01:12.333212   42158 logs.go:284] 2 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:01:12.333268   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:12.337582   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:12.341581   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:12.341637   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:12.358204   42158 cri.go:89] found id: ""
	I0823 19:01:12.358232   42158 logs.go:284] 0 containers: []
	W0823 19:01:12.358243   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:12.358250   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:12.358311   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:12.376084   42158 cri.go:89] found id: ""
	I0823 19:01:12.376110   42158 logs.go:284] 0 containers: []
	W0823 19:01:12.376121   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:12.376141   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:12.376157   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:12.439070   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:01:12.439105   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:12.458486   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:12.458517   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:12.475203   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:12.475231   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:12.500211   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:12.500239   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:12.511852   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:12.511893   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:12.591689   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:12.591713   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:12.591725   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:12.638835   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:12.638873   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:12.660630   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:01:12.660667   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:12.725465   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:12.725512   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:15.295038   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:20.295705   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:01:20.295773   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:20.295838   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:20.319375   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:20.319422   42158 cri.go:89] found id: "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	I0823 19:01:20.319428   42158 cri.go:89] found id: ""
	I0823 19:01:20.319435   42158 logs.go:284] 2 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]
	I0823 19:01:20.319489   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:20.323288   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:20.327836   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:20.327896   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:20.349359   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:20.349387   42158 cri.go:89] found id: ""
	I0823 19:01:20.349396   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:20.349456   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:20.353577   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:20.353637   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:20.372786   42158 cri.go:89] found id: ""
	I0823 19:01:20.372808   42158 logs.go:284] 0 containers: []
	W0823 19:01:20.372816   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:20.372823   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:20.372881   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:20.389467   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:20.389494   42158 cri.go:89] found id: ""
	I0823 19:01:20.389503   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:20.389572   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:20.393456   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:20.393513   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:20.410243   42158 cri.go:89] found id: ""
	I0823 19:01:20.410274   42158 logs.go:284] 0 containers: []
	W0823 19:01:20.410284   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:20.410293   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:20.410360   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:20.428194   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:20.428222   42158 cri.go:89] found id: "623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:20.428228   42158 cri.go:89] found id: ""
	I0823 19:01:20.428236   42158 logs.go:284] 2 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b]
	I0823 19:01:20.428292   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:20.432534   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:20.436449   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:20.436516   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:20.451987   42158 cri.go:89] found id: ""
	I0823 19:01:20.452012   42158 logs.go:284] 0 containers: []
	W0823 19:01:20.452021   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:20.452029   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:20.452085   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:20.469452   42158 cri.go:89] found id: ""
	I0823 19:01:20.469483   42158 logs.go:284] 0 containers: []
	W0823 19:01:20.469493   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:20.469506   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:20.469522   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:20.511575   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:20.511611   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:20.533202   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:20.533235   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:20.597785   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:20.597830   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0823 19:01:34.594092   42158 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (13.996239717s)
	W0823 19:01:34.594136   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:34.594150   42158 logs.go:123] Gathering logs for kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9] ...
	I0823 19:01:34.594162   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9"
	W0823 19:01:34.624938   42158 logs.go:130] failed kube-apiserver [7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9" /bin/bash -c "sudo /bin/crictl logs --tail 400 7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9": Process exited with status 1
	stdout:
	
	stderr:
	E0823 19:01:34.615314    4482 remote_runtime.go:329] ContainerStatus "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9": not found
	time="2023-08-23T19:01:34Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9\": not found"
	 output: 
	** stderr ** 
	E0823 19:01:34.615314    4482 remote_runtime.go:329] ContainerStatus "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9": not found
	time="2023-08-23T19:01:34Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"7f68e1fada9481af11a43f2369794e594f802aee6b664b4b13e2f7461c948dd9\": not found"
	
	** /stderr **
	I0823 19:01:34.624980   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:34.624995   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:34.649913   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:34.649961   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:34.695033   42158 logs.go:123] Gathering logs for kube-controller-manager [623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b] ...
	I0823 19:01:34.695072   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 623f689bf6ef6889dd9d4bb6c40227ff3f27cc3d1764414a7cbe9c133c02079b"
	I0823 19:01:34.740053   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:34.740090   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:34.810837   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:34.810875   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:34.823386   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:34.823419   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:37.349507   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:37.350145   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:37.350203   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:37.350260   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:37.369282   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:37.369302   42158 cri.go:89] found id: ""
	I0823 19:01:37.369311   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:37.369360   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:37.374636   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:37.374707   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:37.395618   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:37.395637   42158 cri.go:89] found id: ""
	I0823 19:01:37.395645   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:37.395694   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:37.399822   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:37.399880   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:37.418837   42158 cri.go:89] found id: ""
	I0823 19:01:37.418860   42158 logs.go:284] 0 containers: []
	W0823 19:01:37.418870   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:37.418878   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:37.418932   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:37.438033   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:37.438060   42158 cri.go:89] found id: ""
	I0823 19:01:37.438069   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:37.438134   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:37.442554   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:37.442622   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:37.465115   42158 cri.go:89] found id: ""
	I0823 19:01:37.465143   42158 logs.go:284] 0 containers: []
	W0823 19:01:37.465152   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:37.465160   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:37.465221   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:37.485723   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:37.485744   42158 cri.go:89] found id: ""
	I0823 19:01:37.485753   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:37.485801   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:37.490635   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:37.490700   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:37.507984   42158 cri.go:89] found id: ""
	I0823 19:01:37.508009   42158 logs.go:284] 0 containers: []
	W0823 19:01:37.508017   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:37.508023   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:37.508069   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:37.525680   42158 cri.go:89] found id: ""
	I0823 19:01:37.525702   42158 logs.go:284] 0 containers: []
	W0823 19:01:37.525711   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:37.525727   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:37.525740   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:37.557744   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:37.557778   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:37.617335   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:37.617371   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:37.633370   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:37.633397   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:37.689497   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:37.689535   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:37.737283   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:37.737317   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:37.780779   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:37.780824   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:37.873525   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:37.873561   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:37.873576   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:37.897324   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:37.897359   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:40.415430   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:40.415978   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:40.416025   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:40.416075   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:40.438396   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:40.438419   42158 cri.go:89] found id: ""
	I0823 19:01:40.438429   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:40.438500   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:40.443538   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:40.443603   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:40.464165   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:40.464181   42158 cri.go:89] found id: ""
	I0823 19:01:40.464188   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:40.464247   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:40.468352   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:40.468414   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:40.490923   42158 cri.go:89] found id: ""
	I0823 19:01:40.490951   42158 logs.go:284] 0 containers: []
	W0823 19:01:40.490961   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:40.490968   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:40.491038   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:40.511355   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:40.511381   42158 cri.go:89] found id: ""
	I0823 19:01:40.511391   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:40.511440   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:40.515506   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:40.515566   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:40.538619   42158 cri.go:89] found id: ""
	I0823 19:01:40.538646   42158 logs.go:284] 0 containers: []
	W0823 19:01:40.538656   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:40.538663   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:40.538725   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:40.557812   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:40.557833   42158 cri.go:89] found id: ""
	I0823 19:01:40.557843   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:40.557886   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:40.563351   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:40.563420   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:40.583251   42158 cri.go:89] found id: ""
	I0823 19:01:40.583280   42158 logs.go:284] 0 containers: []
	W0823 19:01:40.583291   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:40.583297   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:40.583359   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:40.604398   42158 cri.go:89] found id: ""
	I0823 19:01:40.604421   42158 logs.go:284] 0 containers: []
	W0823 19:01:40.604428   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:40.604439   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:40.604451   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:40.644323   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:40.644355   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:40.668739   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:40.668770   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:40.679752   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:40.679791   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:40.701622   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:40.701651   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:40.727698   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:40.727733   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:40.768384   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:40.768422   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:40.840115   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:40.840149   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:40.907639   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:40.907683   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:40.995264   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:43.496202   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:43.496976   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:43.497032   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:43.497095   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:43.519823   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:43.519855   42158 cri.go:89] found id: ""
	I0823 19:01:43.519864   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:43.519924   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:43.526399   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:43.526467   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:43.548218   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:43.548243   42158 cri.go:89] found id: ""
	I0823 19:01:43.548252   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:43.548324   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:43.553783   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:43.553880   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:43.574546   42158 cri.go:89] found id: ""
	I0823 19:01:43.574578   42158 logs.go:284] 0 containers: []
	W0823 19:01:43.574589   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:43.574596   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:43.574659   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:43.597773   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:43.597797   42158 cri.go:89] found id: ""
	I0823 19:01:43.597805   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:43.597862   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:43.605064   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:43.605137   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:43.629817   42158 cri.go:89] found id: ""
	I0823 19:01:43.629848   42158 logs.go:284] 0 containers: []
	W0823 19:01:43.629859   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:43.629868   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:43.629939   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:43.653690   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:43.653715   42158 cri.go:89] found id: ""
	I0823 19:01:43.653725   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:43.653794   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:43.658917   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:43.658991   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:43.683834   42158 cri.go:89] found id: ""
	I0823 19:01:43.683864   42158 logs.go:284] 0 containers: []
	W0823 19:01:43.683874   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:43.683882   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:43.683941   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:43.705834   42158 cri.go:89] found id: ""
	I0823 19:01:43.705870   42158 logs.go:284] 0 containers: []
	W0823 19:01:43.705881   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:43.705899   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:43.705917   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:43.734493   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:43.734532   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:43.757922   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:43.757953   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:43.784708   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:43.784751   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:43.888107   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:43.888145   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:43.911024   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:43.911069   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:44.047087   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:44.047122   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:44.047137   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:44.110564   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:44.110618   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:44.160684   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:44.160730   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:46.740696   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:46.741328   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:46.741381   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:46.741438   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:46.761056   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:46.761078   42158 cri.go:89] found id: ""
	I0823 19:01:46.761086   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:46.761141   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:46.765283   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:46.765359   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:46.782366   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:46.782389   42158 cri.go:89] found id: ""
	I0823 19:01:46.782396   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:46.782454   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:46.786525   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:46.786602   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:46.809609   42158 cri.go:89] found id: ""
	I0823 19:01:46.809634   42158 logs.go:284] 0 containers: []
	W0823 19:01:46.809645   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:46.809651   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:46.809710   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:46.827901   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:46.827916   42158 cri.go:89] found id: ""
	I0823 19:01:46.827923   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:46.827971   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:46.832624   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:46.832689   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:46.848153   42158 cri.go:89] found id: ""
	I0823 19:01:46.848178   42158 logs.go:284] 0 containers: []
	W0823 19:01:46.848185   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:46.848190   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:46.848252   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:46.865629   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:46.865657   42158 cri.go:89] found id: ""
	I0823 19:01:46.865667   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:46.865727   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:46.869628   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:46.869698   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:46.888120   42158 cri.go:89] found id: ""
	I0823 19:01:46.888149   42158 logs.go:284] 0 containers: []
	W0823 19:01:46.888158   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:46.888166   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:46.888227   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:46.908581   42158 cri.go:89] found id: ""
	I0823 19:01:46.908667   42158 logs.go:284] 0 containers: []
	W0823 19:01:46.908683   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:46.908700   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:46.908717   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:46.921817   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:46.921845   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:47.004274   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:47.004302   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:47.004316   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:47.027351   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:47.027393   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:47.045403   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:47.045436   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:47.068578   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:47.068662   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:47.137798   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:47.137832   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:47.191857   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:47.191890   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:47.231553   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:47.231595   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:49.787071   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:49.787774   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:49.787878   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:49.787951   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:49.811779   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:49.811859   42158 cri.go:89] found id: ""
	I0823 19:01:49.811885   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:49.811963   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:49.817244   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:49.817371   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:49.839614   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:49.839711   42158 cri.go:89] found id: ""
	I0823 19:01:49.839734   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:49.839817   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:49.845249   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:49.845391   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:49.867794   42158 cri.go:89] found id: ""
	I0823 19:01:49.867822   42158 logs.go:284] 0 containers: []
	W0823 19:01:49.867833   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:49.867841   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:49.867909   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:49.888893   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:49.888914   42158 cri.go:89] found id: ""
	I0823 19:01:49.888922   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:49.888995   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:49.893402   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:49.893460   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:49.913148   42158 cri.go:89] found id: ""
	I0823 19:01:49.913178   42158 logs.go:284] 0 containers: []
	W0823 19:01:49.913188   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:49.913197   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:49.913261   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:49.938569   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:49.938598   42158 cri.go:89] found id: ""
	I0823 19:01:49.938607   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:49.938664   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:49.943217   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:49.943295   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:49.964417   42158 cri.go:89] found id: ""
	I0823 19:01:49.964445   42158 logs.go:284] 0 containers: []
	W0823 19:01:49.964461   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:49.964469   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:49.964536   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:49.986383   42158 cri.go:89] found id: ""
	I0823 19:01:49.986475   42158 logs.go:284] 0 containers: []
	W0823 19:01:49.986499   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:49.986542   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:49.986565   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:49.998013   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:49.998041   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:50.092639   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:50.092667   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:50.092681   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:50.148604   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:50.148645   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:50.178230   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:50.178258   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:50.247185   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:50.247222   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:50.271730   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:50.271775   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:50.291706   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:50.291740   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:50.338589   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:50.338627   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:52.878444   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:52.879216   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:52.879298   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:52.879384   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:52.900196   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:52.900238   42158 cri.go:89] found id: ""
	I0823 19:01:52.900247   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:52.900312   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:52.906405   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:52.906532   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:52.929300   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:52.929327   42158 cri.go:89] found id: ""
	I0823 19:01:52.929336   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:52.929411   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:52.935119   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:52.935241   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:52.963639   42158 cri.go:89] found id: ""
	I0823 19:01:52.963664   42158 logs.go:284] 0 containers: []
	W0823 19:01:52.963676   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:52.963684   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:52.963743   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:52.986439   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:52.986463   42158 cri.go:89] found id: ""
	I0823 19:01:52.986472   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:52.986527   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:52.990967   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:52.991034   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:53.013522   42158 cri.go:89] found id: ""
	I0823 19:01:53.013572   42158 logs.go:284] 0 containers: []
	W0823 19:01:53.013583   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:53.013592   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:53.013665   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:53.034609   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:53.034636   42158 cri.go:89] found id: ""
	I0823 19:01:53.034645   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:53.034738   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:53.039799   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:53.039942   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:53.057533   42158 cri.go:89] found id: ""
	I0823 19:01:53.057573   42158 logs.go:284] 0 containers: []
	W0823 19:01:53.057589   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:53.057597   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:53.057659   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:53.077474   42158 cri.go:89] found id: ""
	I0823 19:01:53.077506   42158 logs.go:284] 0 containers: []
	W0823 19:01:53.077516   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:53.077536   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:53.077563   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:53.165271   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:53.165361   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:53.165398   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:53.188589   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:53.188692   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:53.233807   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:53.233863   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:53.301624   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:53.301663   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:53.339097   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:53.339132   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:53.439788   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:53.439842   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:53.459929   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:53.459979   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:53.490869   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:53.490911   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:56.065613   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:56.066301   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:56.066364   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:56.066422   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:56.086119   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:56.086149   42158 cri.go:89] found id: ""
	I0823 19:01:56.086159   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:56.086232   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:56.090226   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:56.090285   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:56.107161   42158 cri.go:89] found id: "c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:56.107187   42158 cri.go:89] found id: ""
	I0823 19:01:56.107195   42158 logs.go:284] 1 containers: [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6]
	I0823 19:01:56.107253   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:56.111398   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:56.111465   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:56.136729   42158 cri.go:89] found id: ""
	I0823 19:01:56.136757   42158 logs.go:284] 0 containers: []
	W0823 19:01:56.136766   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:56.136773   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:56.136833   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:56.154599   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:56.154628   42158 cri.go:89] found id: ""
	I0823 19:01:56.154639   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:56.154704   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:56.159211   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:56.159296   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:56.176131   42158 cri.go:89] found id: ""
	I0823 19:01:56.176163   42158 logs.go:284] 0 containers: []
	W0823 19:01:56.176174   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:56.176181   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:56.176242   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:56.195224   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:56.195253   42158 cri.go:89] found id: ""
	I0823 19:01:56.195263   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:56.195320   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:56.199612   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:56.199698   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:56.215209   42158 cri.go:89] found id: ""
	I0823 19:01:56.215234   42158 logs.go:284] 0 containers: []
	W0823 19:01:56.215243   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:56.215251   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:56.215312   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:56.229537   42158 cri.go:89] found id: ""
	I0823 19:01:56.229572   42158 logs.go:284] 0 containers: []
	W0823 19:01:56.229582   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:56.229602   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:56.229618   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:56.254522   42158 logs.go:123] Gathering logs for etcd [c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6] ...
	I0823 19:01:56.254559   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c79aa27b93e26ea2f01a7686f19f76cd5f6aa280028e750c90a4e5460cc942f6"
	I0823 19:01:56.271896   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:56.271929   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:56.328513   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:56.328560   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:56.372600   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:56.372645   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:01:56.394275   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:56.394310   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:56.472335   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:56.472383   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:56.482678   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:56.482705   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:56.578167   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:56.578199   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:56.578213   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:59.152734   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:01:59.153385   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:01:59.153441   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:01:59.153499   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:01:59.174101   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:59.174128   42158 cri.go:89] found id: ""
	I0823 19:01:59.174137   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:01:59.174185   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:59.178159   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:01:59.178216   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:01:59.197181   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:01:59.197202   42158 cri.go:89] found id: ""
	I0823 19:01:59.197211   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:01:59.197261   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:59.201239   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:01:59.201305   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:01:59.224678   42158 cri.go:89] found id: ""
	I0823 19:01:59.224704   42158 logs.go:284] 0 containers: []
	W0823 19:01:59.224717   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:01:59.224725   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:01:59.224790   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:01:59.241831   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:59.241855   42158 cri.go:89] found id: ""
	I0823 19:01:59.241864   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:01:59.241925   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:59.250229   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:01:59.250307   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:01:59.267396   42158 cri.go:89] found id: ""
	I0823 19:01:59.267430   42158 logs.go:284] 0 containers: []
	W0823 19:01:59.267440   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:01:59.267448   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:01:59.267511   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:01:59.285068   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:59.285094   42158 cri.go:89] found id: ""
	I0823 19:01:59.285103   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:01:59.285159   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:01:59.289108   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:01:59.289174   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:01:59.304400   42158 cri.go:89] found id: ""
	I0823 19:01:59.304426   42158 logs.go:284] 0 containers: []
	W0823 19:01:59.304435   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:01:59.304441   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:01:59.304504   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:01:59.320923   42158 cri.go:89] found id: ""
	I0823 19:01:59.320954   42158 logs.go:284] 0 containers: []
	W0823 19:01:59.320964   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:01:59.320983   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:01:59.321002   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:01:59.337771   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:01:59.337806   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:01:59.370253   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:01:59.370287   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:01:59.423288   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:01:59.423326   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:01:59.502151   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:01:59.502185   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:01:59.515846   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:01:59.515885   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:01:59.607277   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:01:59.607298   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:01:59.607309   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:01:59.630618   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:01:59.630650   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:01:59.680105   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:01:59.680133   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:02.204362   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:02.204942   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:02.204998   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:02.205048   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:02.220287   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:02.220314   42158 cri.go:89] found id: ""
	I0823 19:02:02.220323   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:02:02.220387   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:02.225063   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:02.225130   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:02.239531   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:02.239547   42158 cri.go:89] found id: ""
	I0823 19:02:02.239554   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:02.239593   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:02.243706   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:02.243753   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:02.262239   42158 cri.go:89] found id: ""
	I0823 19:02:02.262263   42158 logs.go:284] 0 containers: []
	W0823 19:02:02.262269   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:02.262275   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:02.262322   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:02.281740   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:02.281769   42158 cri.go:89] found id: ""
	I0823 19:02:02.281782   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:02.281842   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:02.285872   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:02.285956   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:02.300857   42158 cri.go:89] found id: ""
	I0823 19:02:02.300882   42158 logs.go:284] 0 containers: []
	W0823 19:02:02.300891   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:02.300897   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:02.300945   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:02.315217   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:02.315233   42158 cri.go:89] found id: ""
	I0823 19:02:02.315239   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:02:02.315283   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:02.319130   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:02.319201   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:02.338605   42158 cri.go:89] found id: ""
	I0823 19:02:02.338625   42158 logs.go:284] 0 containers: []
	W0823 19:02:02.338632   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:02.338637   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:02.338686   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:02.353297   42158 cri.go:89] found id: ""
	I0823 19:02:02.353316   42158 logs.go:284] 0 containers: []
	W0823 19:02:02.353322   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:02.353333   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:02.353344   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:02.419572   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:02:02.419606   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:02.443315   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:02.443344   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:02.463531   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:02:02.463566   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:02.499723   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:02.499763   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:02.558192   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:02.558227   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:02.568166   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:02.568196   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:02.644169   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:02.644201   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:02.644214   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:02.687355   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:02.687386   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:05.208093   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:05.208764   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:05.208806   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:05.208869   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:05.230602   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:05.230627   42158 cri.go:89] found id: ""
	I0823 19:02:05.230637   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:02:05.230707   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:05.236097   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:05.236170   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:05.260447   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:05.260483   42158 cri.go:89] found id: ""
	I0823 19:02:05.260495   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:05.260558   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:05.265412   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:05.265479   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:05.285031   42158 cri.go:89] found id: ""
	I0823 19:02:05.285154   42158 logs.go:284] 0 containers: []
	W0823 19:02:05.285184   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:05.285208   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:05.285310   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:05.306259   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:05.306283   42158 cri.go:89] found id: ""
	I0823 19:02:05.306292   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:05.306345   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:05.311599   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:05.311679   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:05.332589   42158 cri.go:89] found id: ""
	I0823 19:02:05.332617   42158 logs.go:284] 0 containers: []
	W0823 19:02:05.332627   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:05.332634   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:05.332695   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:05.352288   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:05.352317   42158 cri.go:89] found id: ""
	I0823 19:02:05.352326   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:02:05.352385   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:05.357360   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:05.357430   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:05.375288   42158 cri.go:89] found id: ""
	I0823 19:02:05.375322   42158 logs.go:284] 0 containers: []
	W0823 19:02:05.375332   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:05.375339   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:05.375398   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:05.393098   42158 cri.go:89] found id: ""
	I0823 19:02:05.393124   42158 logs.go:284] 0 containers: []
	W0823 19:02:05.393134   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:05.393150   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:05.393167   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:05.476666   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:05.476685   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:02:05.476697   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:05.498972   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:02:05.498999   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:05.540129   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:05.540161   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:05.591048   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:05.591083   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:05.684866   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:05.684957   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:05.698862   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:05.698899   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:05.716896   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:05.716932   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:05.766238   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:05.766272   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:08.299295   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:08.299941   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:08.299998   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:08.300074   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:08.318787   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:08.318811   42158 cri.go:89] found id: ""
	I0823 19:02:08.318821   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:02:08.318880   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:08.323464   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:08.323543   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:08.350736   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:08.350761   42158 cri.go:89] found id: ""
	I0823 19:02:08.350770   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:08.350827   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:08.355700   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:08.355770   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:08.373372   42158 cri.go:89] found id: ""
	I0823 19:02:08.373401   42158 logs.go:284] 0 containers: []
	W0823 19:02:08.373412   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:08.373421   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:08.373480   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:08.394533   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:08.394558   42158 cri.go:89] found id: ""
	I0823 19:02:08.394567   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:08.394625   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:08.399908   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:08.399982   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:08.419313   42158 cri.go:89] found id: ""
	I0823 19:02:08.419341   42158 logs.go:284] 0 containers: []
	W0823 19:02:08.419351   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:08.419359   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:08.419422   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:08.440830   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:08.440861   42158 cri.go:89] found id: ""
	I0823 19:02:08.440870   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:02:08.440920   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:08.444725   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:08.444784   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:08.462791   42158 cri.go:89] found id: ""
	I0823 19:02:08.462817   42158 logs.go:284] 0 containers: []
	W0823 19:02:08.462826   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:08.462833   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:08.462894   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:08.480986   42158 cri.go:89] found id: ""
	I0823 19:02:08.481006   42158 logs.go:284] 0 containers: []
	W0823 19:02:08.481012   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:08.481025   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:08.481035   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:08.558552   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:08.558587   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:08.653537   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:08.653579   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:02:08.653596   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:08.683638   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:02:08.683675   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:08.721069   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:08.721103   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:08.748807   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:08.748838   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:08.758909   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:08.758938   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:08.780296   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:08.780338   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:08.842709   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:08.842741   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:11.405580   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:11.406173   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:11.406218   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:11.406262   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:11.425984   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:11.426006   42158 cri.go:89] found id: ""
	I0823 19:02:11.426013   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:02:11.426058   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:11.429932   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:11.429987   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:11.449388   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:11.449407   42158 cri.go:89] found id: ""
	I0823 19:02:11.449414   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:11.449467   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:11.454356   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:11.454429   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:11.470592   42158 cri.go:89] found id: ""
	I0823 19:02:11.470619   42158 logs.go:284] 0 containers: []
	W0823 19:02:11.470629   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:11.470636   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:11.470701   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:11.487194   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:11.487220   42158 cri.go:89] found id: ""
	I0823 19:02:11.487230   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:11.487288   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:11.491038   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:11.491088   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:11.506377   42158 cri.go:89] found id: ""
	I0823 19:02:11.506404   42158 logs.go:284] 0 containers: []
	W0823 19:02:11.506414   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:11.506425   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:11.506483   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:11.529386   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:11.529406   42158 cri.go:89] found id: ""
	I0823 19:02:11.529412   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:02:11.529459   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:11.534415   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:11.534464   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:11.551939   42158 cri.go:89] found id: ""
	I0823 19:02:11.551962   42158 logs.go:284] 0 containers: []
	W0823 19:02:11.551969   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:11.551975   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:11.552028   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:11.570359   42158 cri.go:89] found id: ""
	I0823 19:02:11.570379   42158 logs.go:284] 0 containers: []
	W0823 19:02:11.570386   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:11.570402   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:11.570415   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:11.611653   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:02:11.611685   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:11.646158   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:11.646199   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:11.663546   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:11.663574   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:11.722414   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:11.722439   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:11.745694   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:11.745722   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:11.816908   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:11.816946   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:11.829440   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:11.829472   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:11.942433   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:11.942457   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:02:11.942479   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:14.469731   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:14.470404   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:14.470459   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:14.470520   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:14.497421   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:14.497450   42158 cri.go:89] found id: ""
	I0823 19:02:14.497460   42158 logs.go:284] 1 containers: [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:02:14.497535   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:14.504354   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:14.504441   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:14.527547   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:14.527578   42158 cri.go:89] found id: ""
	I0823 19:02:14.527588   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:14.527656   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:14.533265   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:14.533346   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:14.563132   42158 cri.go:89] found id: ""
	I0823 19:02:14.563164   42158 logs.go:284] 0 containers: []
	W0823 19:02:14.563175   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:14.563183   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:14.563250   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:14.590589   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:14.590610   42158 cri.go:89] found id: ""
	I0823 19:02:14.590617   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:14.590668   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:14.598831   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:14.598906   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:14.630125   42158 cri.go:89] found id: ""
	I0823 19:02:14.630151   42158 logs.go:284] 0 containers: []
	W0823 19:02:14.630161   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:14.630168   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:14.630230   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:14.659287   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:14.659310   42158 cri.go:89] found id: ""
	I0823 19:02:14.659319   42158 logs.go:284] 1 containers: [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:02:14.659377   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:14.666377   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:14.666448   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:14.686678   42158 cri.go:89] found id: ""
	I0823 19:02:14.686707   42158 logs.go:284] 0 containers: []
	W0823 19:02:14.686715   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:14.686722   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:14.686788   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:14.708518   42158 cri.go:89] found id: ""
	I0823 19:02:14.708544   42158 logs.go:284] 0 containers: []
	W0823 19:02:14.708552   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:14.708566   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:14.708580   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:14.833200   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:14.833225   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:02:14.833241   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:14.863331   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:14.863366   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:14.891255   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:14.891283   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:14.971956   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:14.972002   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:14.986946   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:14.986975   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:15.040568   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:02:15.040602   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:15.084849   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:15.084894   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:15.166269   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:15.166317   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:17.702038   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:22.702845   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0823 19:02:22.702926   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:22.702992   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:22.722892   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:22.722914   42158 cri.go:89] found id: "75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:22.722922   42158 cri.go:89] found id: ""
	I0823 19:02:22.722930   42158 logs.go:284] 2 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451]
	I0823 19:02:22.722983   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:22.728079   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:22.732777   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:22.732848   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:22.760531   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:22.760558   42158 cri.go:89] found id: ""
	I0823 19:02:22.760568   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:22.760626   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:22.766467   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:22.766542   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:22.789371   42158 cri.go:89] found id: ""
	I0823 19:02:22.789399   42158 logs.go:284] 0 containers: []
	W0823 19:02:22.789414   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:22.789422   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:22.789480   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:22.810504   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:22.810538   42158 cri.go:89] found id: ""
	I0823 19:02:22.810548   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:22.810607   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:22.815837   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:22.815911   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:22.837243   42158 cri.go:89] found id: ""
	I0823 19:02:22.837270   42158 logs.go:284] 0 containers: []
	W0823 19:02:22.837279   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:22.837286   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:22.837344   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:22.857331   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:22.857366   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:22.857373   42158 cri.go:89] found id: ""
	I0823 19:02:22.857382   42158 logs.go:284] 2 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:02:22.857443   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:22.861733   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:22.867360   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:22.867426   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:22.888341   42158 cri.go:89] found id: ""
	I0823 19:02:22.888380   42158 logs.go:284] 0 containers: []
	W0823 19:02:22.888390   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:22.888409   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:22.888470   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:22.910043   42158 cri.go:89] found id: ""
	I0823 19:02:22.910069   42158 logs.go:284] 0 containers: []
	W0823 19:02:22.910079   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:22.910090   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:22.910106   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:22.923414   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:02:22.923455   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:22.947192   42158 logs.go:123] Gathering logs for kube-apiserver [75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451] ...
	I0823 19:02:22.947231   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 75f3249dad04bdbff67d765d69bf69b5227f5069560d0a346220bcf485fb3451"
	I0823 19:02:22.974242   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:22.974274   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:23.049042   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:23.049088   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:23.077102   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:23.077134   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:23.160787   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:23.160824   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0823 19:02:39.058249   42158 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (15.89739688s)
	W0823 19:02:39.058304   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:39.058317   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:39.058331   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:39.079192   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:39.079232   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:39.129312   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:02:39.129369   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:39.149667   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:02:39.149706   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
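	(Editor's note: each "listing CRI containers" / "found id" pair above is the output of running "sudo crictl ps -a --quiet --name=<component>" and reading one container ID per line. The Go sketch below is a rough illustration of that lookup, not minikube's actual code; the command and flags are the ones shown in the log, while the helper function and the component list are assumptions for the example.)

	// Illustrative sketch only: enumerate control-plane containers the way the
	// log above does, by shelling out to crictl and collecting IDs line by line.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainerIDs(name string) ([]string, error) {
		// Same command as in the log: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line) // each non-empty line is one container ID
			}
		}
		return ids, nil
	}

	func main() {
		// Component names taken from the log; an empty result corresponds to the
		// "No container was found matching ..." warnings above.
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(component)
			if err != nil {
				fmt.Println(component, "lookup failed:", err)
				continue
			}
			fmt.Printf("%d containers for %s: %v\n", len(ids), component, ids)
		}
	}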
	I0823 19:02:41.692103   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:41.692624   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:41.692670   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:41.692712   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:41.711455   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:41.711482   42158 cri.go:89] found id: ""
	I0823 19:02:41.711491   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:02:41.711553   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:41.716561   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:41.716631   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:41.734612   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:41.734639   42158 cri.go:89] found id: ""
	I0823 19:02:41.734649   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:41.734705   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:41.739393   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:41.739473   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:41.755391   42158 cri.go:89] found id: ""
	I0823 19:02:41.755416   42158 logs.go:284] 0 containers: []
	W0823 19:02:41.755426   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:41.755433   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:41.755485   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:41.773069   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:41.773092   42158 cri.go:89] found id: ""
	I0823 19:02:41.773101   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:41.773163   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:41.777010   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:41.777076   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:41.801379   42158 cri.go:89] found id: ""
	I0823 19:02:41.801407   42158 logs.go:284] 0 containers: []
	W0823 19:02:41.801417   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:41.801425   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:41.801481   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:41.822032   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:41.822054   42158 cri.go:89] found id: "ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:41.822061   42158 cri.go:89] found id: ""
	I0823 19:02:41.822069   42158 logs.go:284] 2 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6]
	I0823 19:02:41.822119   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:41.832279   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:41.839621   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:41.839689   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:41.855065   42158 cri.go:89] found id: ""
	I0823 19:02:41.855086   42158 logs.go:284] 0 containers: []
	W0823 19:02:41.855112   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:41.855119   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:41.855170   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:41.876089   42158 cri.go:89] found id: ""
	I0823 19:02:41.876112   42158 logs.go:284] 0 containers: []
	W0823 19:02:41.876121   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:41.876138   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:41.876179   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:41.887312   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:02:41.887350   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:41.909832   42158 logs.go:123] Gathering logs for kube-controller-manager [ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6] ...
	I0823 19:02:41.909861   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ec3a86d042f9fe5256efd116dd1750d3d0299a146c3fab6edb84b75e65135cc6"
	I0823 19:02:41.956296   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:41.956338   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:42.017968   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:42.018005   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:42.101640   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:42.101666   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:42.101681   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:42.123783   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:42.123816   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:42.189593   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:02:42.189633   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:42.222686   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:42.222720   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:42.276322   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:42.276355   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
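	(Editor's note: each cycle above begins with "Checking apiserver healthz ... connection refused" and only then falls back to log collection. The health probe is an HTTPS GET of /healthz on the control-plane endpoint, retried until it answers. The Go sketch below is a minimal illustration of such a probe, not minikube's implementation; the URL, retry count, and delay are assumptions taken from the log, and TLS verification is skipped only because the probe target uses a self-signed certificate.)

	// Illustrative sketch only: poll the apiserver /healthz endpoint until it
	// responds, as the repeated "stopped: ... connection refused" lines suggest.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, attempts int, delay time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Self-signed apiserver certificate, so skip verification for this probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver answered /healthz
				}
			}
			// "connection refused" in the log corresponds to err != nil here.
			time.Sleep(delay)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}

	func main() {
		// Endpoint taken from the log; attempts and delay are assumed values.
		if err := waitForHealthz("https://192.168.72.172:8443/healthz", 10, 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}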
	I0823 19:02:44.802376   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:44.803152   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:44.803210   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:44.803271   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:44.821575   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:44.821606   42158 cri.go:89] found id: ""
	I0823 19:02:44.821615   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:02:44.821676   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:44.826795   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:44.826867   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:44.843813   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:44.843840   42158 cri.go:89] found id: ""
	I0823 19:02:44.843850   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:44.843906   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:44.847562   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:44.847635   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:44.865874   42158 cri.go:89] found id: ""
	I0823 19:02:44.865902   42158 logs.go:284] 0 containers: []
	W0823 19:02:44.865916   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:44.865923   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:44.865985   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:44.888364   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:44.888389   42158 cri.go:89] found id: ""
	I0823 19:02:44.888400   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:44.888462   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:44.892922   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:44.892998   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:44.917945   42158 cri.go:89] found id: ""
	I0823 19:02:44.918029   42158 logs.go:284] 0 containers: []
	W0823 19:02:44.918054   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:44.918072   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:44.918144   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:44.942020   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:44.942050   42158 cri.go:89] found id: ""
	I0823 19:02:44.942060   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:02:44.942116   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:44.947512   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:44.947588   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:44.968162   42158 cri.go:89] found id: ""
	I0823 19:02:44.968188   42158 logs.go:284] 0 containers: []
	W0823 19:02:44.968198   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:44.968206   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:44.968271   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:44.991480   42158 cri.go:89] found id: ""
	I0823 19:02:44.991506   42158 logs.go:284] 0 containers: []
	W0823 19:02:44.991516   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:44.991536   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:02:44.991552   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:45.036074   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:45.036117   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:45.121917   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:45.121967   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:45.215887   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:02:45.215929   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:45.250370   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:45.250412   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:45.272452   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:45.272482   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:45.331195   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:45.331240   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:45.353438   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:45.353477   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:45.366092   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:45.366128   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:45.441744   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:47.942715   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:47.943350   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:47.943407   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:47.943473   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:47.964024   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:47.964051   42158 cri.go:89] found id: ""
	I0823 19:02:47.964061   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:02:47.964125   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:47.969324   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:47.969396   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:47.990217   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:47.990239   42158 cri.go:89] found id: ""
	I0823 19:02:47.990247   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:47.990301   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:47.994825   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:47.994882   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:48.014369   42158 cri.go:89] found id: ""
	I0823 19:02:48.014396   42158 logs.go:284] 0 containers: []
	W0823 19:02:48.014404   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:48.014413   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:48.014470   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:48.033837   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:48.033861   42158 cri.go:89] found id: ""
	I0823 19:02:48.033925   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:48.033985   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:48.038734   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:48.038793   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:48.058946   42158 cri.go:89] found id: ""
	I0823 19:02:48.058967   42158 logs.go:284] 0 containers: []
	W0823 19:02:48.058975   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:48.058982   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:48.059036   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:48.084958   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:48.084981   42158 cri.go:89] found id: ""
	I0823 19:02:48.084991   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:02:48.085050   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:48.090221   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:48.090286   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:48.112991   42158 cri.go:89] found id: ""
	I0823 19:02:48.113017   42158 logs.go:284] 0 containers: []
	W0823 19:02:48.113027   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:48.113036   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:48.113097   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:48.132870   42158 cri.go:89] found id: ""
	I0823 19:02:48.132909   42158 logs.go:284] 0 containers: []
	W0823 19:02:48.132920   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:48.132938   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:48.132956   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:48.203252   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:48.203293   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:48.318654   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:48.318683   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:48.318698   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:48.341354   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:48.341434   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:48.375969   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:48.376008   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:48.390665   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:02:48.390706   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:48.416317   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:48.416369   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:48.476114   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:02:48.476149   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:48.525218   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:48.525264   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
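	(Editor's note: the "Gathering logs for ..." steps above tail each source with a fixed-length command: "crictl logs --tail 400 <id>" for containers and "journalctl -u <unit> -n 400" for systemd units. The Go sketch below mirrors that pattern under the assumption that shelling out with a 400-line tail is all that is required; the helper names and the placeholder container ID are illustrative and not part of the test output.)

	// Illustrative sketch only: collect the same diagnostic logs the test run
	// gathers after each failed apiserver health check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerLogs tails a CRI container's logs, as in the log lines above.
	func containerLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	// unitLogs tails a systemd unit's journal, e.g. kubelet or containerd.
	func unitLogs(unit string) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		return string(out), err
	}

	func main() {
		if logs, err := unitLogs("kubelet"); err == nil {
			fmt.Println(logs)
		}
		// Placeholder ID; in the run above the IDs come from the earlier crictl lookups.
		if logs, err := containerLogs("<container-id>"); err == nil {
			fmt.Println(logs)
		}
	}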
	I0823 19:02:51.104006   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:51.104761   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:51.104822   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:51.104869   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:51.128922   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:51.128947   42158 cri.go:89] found id: ""
	I0823 19:02:51.128955   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:02:51.129011   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:51.133390   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:51.133453   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:51.150350   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:51.150375   42158 cri.go:89] found id: ""
	I0823 19:02:51.150384   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:51.150473   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:51.154444   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:51.154499   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:51.169119   42158 cri.go:89] found id: ""
	I0823 19:02:51.169137   42158 logs.go:284] 0 containers: []
	W0823 19:02:51.169143   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:51.169149   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:51.169194   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:51.184855   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:51.184873   42158 cri.go:89] found id: ""
	I0823 19:02:51.184879   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:51.184917   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:51.188538   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:51.188579   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:51.202463   42158 cri.go:89] found id: ""
	I0823 19:02:51.202479   42158 logs.go:284] 0 containers: []
	W0823 19:02:51.202485   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:51.202491   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:51.202535   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:51.218024   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:51.218043   42158 cri.go:89] found id: ""
	I0823 19:02:51.218049   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:02:51.218095   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:51.221874   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:51.221926   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:51.239085   42158 cri.go:89] found id: ""
	I0823 19:02:51.239112   42158 logs.go:284] 0 containers: []
	W0823 19:02:51.239123   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:51.239130   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:51.239190   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:51.254253   42158 cri.go:89] found id: ""
	I0823 19:02:51.254280   42158 logs.go:284] 0 containers: []
	W0823 19:02:51.254291   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:51.254310   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:51.254328   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:51.328279   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:51.328307   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:02:51.328323   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:51.352992   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:51.353028   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:51.372301   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:51.372324   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:51.412556   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:51.412586   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:51.467195   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:51.467227   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:51.522970   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:51.523003   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:51.533125   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:02:51.533155   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:51.562530   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:51.562563   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:54.083006   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:54.083588   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:54.083633   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:54.083678   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:54.101527   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:54.101562   42158 cri.go:89] found id: ""
	I0823 19:02:54.101571   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:02:54.101626   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:54.106391   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:54.106463   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:54.125435   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:54.125454   42158 cri.go:89] found id: ""
	I0823 19:02:54.125462   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:54.125516   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:54.129217   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:54.129278   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:54.146271   42158 cri.go:89] found id: ""
	I0823 19:02:54.146299   42158 logs.go:284] 0 containers: []
	W0823 19:02:54.146308   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:54.146315   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:54.146378   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:54.168196   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:54.168226   42158 cri.go:89] found id: ""
	I0823 19:02:54.168236   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:54.168288   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:54.172789   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:54.172854   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:54.189115   42158 cri.go:89] found id: ""
	I0823 19:02:54.189137   42158 logs.go:284] 0 containers: []
	W0823 19:02:54.189143   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:54.189148   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:54.189200   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:54.205676   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:54.205702   42158 cri.go:89] found id: ""
	I0823 19:02:54.205711   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:02:54.205767   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:54.210394   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:54.210456   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:54.224818   42158 cri.go:89] found id: ""
	I0823 19:02:54.224840   42158 logs.go:284] 0 containers: []
	W0823 19:02:54.224849   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:54.224857   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:54.224912   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:54.241722   42158 cri.go:89] found id: ""
	I0823 19:02:54.241744   42158 logs.go:284] 0 containers: []
	W0823 19:02:54.241754   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:54.241772   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:54.241785   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:54.251963   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:54.251998   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:54.326471   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:02:54.326501   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:54.326517   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:54.367376   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:54.367419   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:54.390815   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:54.390846   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:54.450243   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:54.450275   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:54.508689   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:02:54.508726   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:54.535230   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:54.535264   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:54.552745   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:02:54.552769   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:57.084122   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:02:57.084706   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:02:57.084751   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:02:57.084797   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:02:57.100988   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:57.101008   42158 cri.go:89] found id: ""
	I0823 19:02:57.101017   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:02:57.101069   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:57.104917   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:02:57.104980   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:02:57.121850   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:57.121877   42158 cri.go:89] found id: ""
	I0823 19:02:57.121897   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:02:57.121961   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:57.125659   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:02:57.125734   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:02:57.141350   42158 cri.go:89] found id: ""
	I0823 19:02:57.141375   42158 logs.go:284] 0 containers: []
	W0823 19:02:57.141384   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:02:57.141392   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:02:57.141446   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:02:57.157308   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:57.157327   42158 cri.go:89] found id: ""
	I0823 19:02:57.157333   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:02:57.157386   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:57.160962   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:02:57.161025   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:02:57.176778   42158 cri.go:89] found id: ""
	I0823 19:02:57.176803   42158 logs.go:284] 0 containers: []
	W0823 19:02:57.176813   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:02:57.176821   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:02:57.176875   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:02:57.193536   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:57.193570   42158 cri.go:89] found id: ""
	I0823 19:02:57.193579   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:02:57.193638   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:02:57.197177   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:02:57.197234   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:02:57.212869   42158 cri.go:89] found id: ""
	I0823 19:02:57.212898   42158 logs.go:284] 0 containers: []
	W0823 19:02:57.212907   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:02:57.212915   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:02:57.212983   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:02:57.230181   42158 cri.go:89] found id: ""
	I0823 19:02:57.230200   42158 logs.go:284] 0 containers: []
	W0823 19:02:57.230207   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:02:57.230219   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:02:57.230229   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:02:57.252916   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:02:57.252941   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:02:57.295943   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:02:57.295977   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:02:57.313862   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:02:57.313897   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:02:57.344052   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:02:57.344081   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:02:57.399502   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:02:57.399533   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:02:57.431896   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:02:57.431927   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:02:57.493932   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:02:57.493974   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:02:57.504244   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:02:57.504269   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:02:57.579398   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:00.080120   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:03:00.080732   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:03:00.080771   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:00.080826   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:00.099299   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:00.099325   42158 cri.go:89] found id: ""
	I0823 19:03:00.099333   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:03:00.099392   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:00.103934   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:00.104008   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:00.121005   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:00.121027   42158 cri.go:89] found id: ""
	I0823 19:03:00.121035   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:03:00.121088   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:00.125108   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:00.125175   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:00.141352   42158 cri.go:89] found id: ""
	I0823 19:03:00.141375   42158 logs.go:284] 0 containers: []
	W0823 19:03:00.141382   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:03:00.141388   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:00.141434   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:00.157097   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:00.157122   42158 cri.go:89] found id: ""
	I0823 19:03:00.157129   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:03:00.157185   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:00.160991   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:00.161049   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:00.177004   42158 cri.go:89] found id: ""
	I0823 19:03:00.177028   42158 logs.go:284] 0 containers: []
	W0823 19:03:00.177034   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:03:00.177040   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:00.177095   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:00.191830   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:00.191849   42158 cri.go:89] found id: ""
	I0823 19:03:00.191858   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:03:00.191913   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:00.195682   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:00.195737   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:00.216305   42158 cri.go:89] found id: ""
	I0823 19:03:00.216324   42158 logs.go:284] 0 containers: []
	W0823 19:03:00.216331   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:00.216339   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:00.216406   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:00.236565   42158 cri.go:89] found id: ""
	I0823 19:03:00.236601   42158 logs.go:284] 0 containers: []
	W0823 19:03:00.236611   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:03:00.236626   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:00.236639   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:00.299211   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:00.299248   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:00.310240   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:03:00.310267   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:00.335528   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:03:00.335561   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:00.354394   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:00.354430   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:00.428562   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:00.428588   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:03:00.428603   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:00.477161   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:03:00.477195   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:00.508552   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:00.508585   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:00.577526   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:03:00.577566   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
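	(A minimal sketch of how this log-gathering pass can be repeated by hand on the node, assuming shell access to the VM, e.g. via 'minikube ssh'; the commands are the same ones the ssh_runner lines above show, and the container ID placeholder is whatever the first command prints.)
	# Find the kube-apiserver container managed by containerd, then pull its recent logs
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /bin/crictl logs --tail 400 <container-id-from-previous-command>
	# Journal-based sources collected for kubelet and containerd
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400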
	I0823 19:03:03.109606   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:03:03.110507   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:03:03.110571   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:03.110632   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:03.138370   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:03.138393   42158 cri.go:89] found id: ""
	I0823 19:03:03.138402   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:03:03.138461   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:03.144386   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:03.144456   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:03.174010   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:03.174034   42158 cri.go:89] found id: ""
	I0823 19:03:03.174042   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:03:03.174099   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:03.180507   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:03.180582   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:03.202745   42158 cri.go:89] found id: ""
	I0823 19:03:03.202766   42158 logs.go:284] 0 containers: []
	W0823 19:03:03.202774   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:03:03.202779   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:03.202838   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:03.223525   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:03.223544   42158 cri.go:89] found id: ""
	I0823 19:03:03.223552   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:03:03.223596   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:03.228772   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:03.228833   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:03.248460   42158 cri.go:89] found id: ""
	I0823 19:03:03.248483   42158 logs.go:284] 0 containers: []
	W0823 19:03:03.248489   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:03:03.248494   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:03.248539   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:03.271438   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:03.271465   42158 cri.go:89] found id: ""
	I0823 19:03:03.271473   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:03:03.271533   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:03.278536   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:03.278609   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:03.307559   42158 cri.go:89] found id: ""
	I0823 19:03:03.307586   42158 logs.go:284] 0 containers: []
	W0823 19:03:03.307595   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:03.307604   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:03.307669   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:03.333341   42158 cri.go:89] found id: ""
	I0823 19:03:03.333368   42158 logs.go:284] 0 containers: []
	W0823 19:03:03.333379   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:03:03.333398   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:03.333417   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:03.432165   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:03.432211   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:03.447639   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:03:03.447677   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:03.475055   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:03:03.475085   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:03.496768   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:03.496849   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:03.592137   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:03.592165   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:03:03.592179   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:03.655317   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:03:03.655355   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:03.701152   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:03.701190   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:03.786679   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:03:03.786718   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:06.322211   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:03:06.322896   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:03:06.322952   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:06.323016   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:06.347387   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:06.347421   42158 cri.go:89] found id: ""
	I0823 19:03:06.347431   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:03:06.347499   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:06.351895   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:06.351969   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:06.372024   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:06.372060   42158 cri.go:89] found id: ""
	I0823 19:03:06.372071   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:03:06.372132   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:06.376799   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:06.376876   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:06.398290   42158 cri.go:89] found id: ""
	I0823 19:03:06.398320   42158 logs.go:284] 0 containers: []
	W0823 19:03:06.398332   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:03:06.398341   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:06.398421   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:06.415499   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:06.415521   42158 cri.go:89] found id: ""
	I0823 19:03:06.415529   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:03:06.415590   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:06.420240   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:06.420323   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:06.439520   42158 cri.go:89] found id: ""
	I0823 19:03:06.439548   42158 logs.go:284] 0 containers: []
	W0823 19:03:06.439559   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:03:06.439567   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:06.439623   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:06.458914   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:06.458938   42158 cri.go:89] found id: ""
	I0823 19:03:06.458947   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:03:06.459011   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:06.463981   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:06.464069   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:06.486459   42158 cri.go:89] found id: ""
	I0823 19:03:06.486490   42158 logs.go:284] 0 containers: []
	W0823 19:03:06.486500   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:06.486508   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:06.486571   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:06.507303   42158 cri.go:89] found id: ""
	I0823 19:03:06.507327   42158 logs.go:284] 0 containers: []
	W0823 19:03:06.507336   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:03:06.507352   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:03:06.507381   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:06.531214   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:03:06.531242   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:06.578009   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:06.578047   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:06.648937   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:06.648973   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:06.714964   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:06.715009   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:06.729350   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:03:06.729394   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:06.763044   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:06.763088   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:06.870706   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:06.870734   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:03:06.870752   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:06.909512   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:03:06.909557   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:09.438528   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:03:09.439245   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:03:09.439303   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:09.439364   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:09.461257   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:09.461280   42158 cri.go:89] found id: ""
	I0823 19:03:09.461289   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:03:09.461343   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:09.466537   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:09.466624   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:09.483439   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:09.483456   42158 cri.go:89] found id: ""
	I0823 19:03:09.483462   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:03:09.483502   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:09.487764   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:09.487840   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:09.509389   42158 cri.go:89] found id: ""
	I0823 19:03:09.509427   42158 logs.go:284] 0 containers: []
	W0823 19:03:09.509438   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:03:09.509446   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:09.509515   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:09.529657   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:09.529679   42158 cri.go:89] found id: ""
	I0823 19:03:09.529685   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:03:09.529736   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:09.533927   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:09.533986   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:09.551230   42158 cri.go:89] found id: ""
	I0823 19:03:09.551257   42158 logs.go:284] 0 containers: []
	W0823 19:03:09.551268   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:03:09.551281   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:09.551344   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:09.572572   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:09.572608   42158 cri.go:89] found id: ""
	I0823 19:03:09.572619   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:03:09.572692   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:09.578438   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:09.578504   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:09.598347   42158 cri.go:89] found id: ""
	I0823 19:03:09.598365   42158 logs.go:284] 0 containers: []
	W0823 19:03:09.598374   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:09.598381   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:09.598437   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:09.617795   42158 cri.go:89] found id: ""
	I0823 19:03:09.617823   42158 logs.go:284] 0 containers: []
	W0823 19:03:09.617833   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:03:09.617854   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:03:09.617871   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:09.634760   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:03:09.634794   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:09.684359   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:03:09.684398   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:09.706145   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:09.706175   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:09.765965   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:09.766012   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:09.839716   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:09.839754   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:09.851279   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:09.851315   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:09.930801   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:09.930826   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:03:09.930840   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:09.951192   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:03:09.951224   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:12.484868   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:03:12.486280   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:03:12.486334   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:12.486394   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:12.504274   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:12.504301   42158 cri.go:89] found id: ""
	I0823 19:03:12.504320   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:03:12.504375   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:12.508614   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:12.508717   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:12.525724   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:12.525756   42158 cri.go:89] found id: ""
	I0823 19:03:12.525769   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:03:12.525826   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:12.529738   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:12.529809   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:12.545365   42158 cri.go:89] found id: ""
	I0823 19:03:12.545391   42158 logs.go:284] 0 containers: []
	W0823 19:03:12.545400   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:03:12.545408   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:12.545456   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:12.561295   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:12.561323   42158 cri.go:89] found id: ""
	I0823 19:03:12.561332   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:03:12.561391   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:12.565302   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:12.565379   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:12.585932   42158 cri.go:89] found id: ""
	I0823 19:03:12.585950   42158 logs.go:284] 0 containers: []
	W0823 19:03:12.585957   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:03:12.585962   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:12.586016   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:12.606324   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:12.606342   42158 cri.go:89] found id: ""
	I0823 19:03:12.606349   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:03:12.606401   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:12.610556   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:12.610623   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:12.631603   42158 cri.go:89] found id: ""
	I0823 19:03:12.631627   42158 logs.go:284] 0 containers: []
	W0823 19:03:12.631636   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:12.631645   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:12.631703   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:12.653310   42158 cri.go:89] found id: ""
	I0823 19:03:12.653338   42158 logs.go:284] 0 containers: []
	W0823 19:03:12.653345   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:03:12.653359   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:03:12.653372   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:12.690740   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:03:12.690772   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:12.714247   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:03:12.714275   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:12.761200   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:12.761239   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:12.846473   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:12.846495   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:03:12.846506   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:12.863867   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:12.863897   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:12.937140   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:03:12.937204   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:12.961980   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:12.962020   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:13.041820   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:13.041865   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:15.555244   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:03:15.555847   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:03:15.555908   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:03:15.555974   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:03:15.577298   42158 cri.go:89] found id: "30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:15.577323   42158 cri.go:89] found id: ""
	I0823 19:03:15.577329   42158 logs.go:284] 1 containers: [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8]
	I0823 19:03:15.577383   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:15.582619   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:03:15.582683   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:03:15.600356   42158 cri.go:89] found id: "3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:15.600381   42158 cri.go:89] found id: ""
	I0823 19:03:15.600390   42158 logs.go:284] 1 containers: [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057]
	I0823 19:03:15.600441   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:15.604571   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:03:15.604643   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:03:15.622391   42158 cri.go:89] found id: ""
	I0823 19:03:15.622419   42158 logs.go:284] 0 containers: []
	W0823 19:03:15.622429   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:03:15.622437   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:03:15.622494   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:03:15.639502   42158 cri.go:89] found id: "17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:15.639526   42158 cri.go:89] found id: ""
	I0823 19:03:15.639535   42158 logs.go:284] 1 containers: [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7]
	I0823 19:03:15.639594   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:15.643299   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:03:15.643367   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:03:15.658846   42158 cri.go:89] found id: ""
	I0823 19:03:15.658874   42158 logs.go:284] 0 containers: []
	W0823 19:03:15.658883   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:03:15.658890   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:03:15.658946   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:03:15.680540   42158 cri.go:89] found id: "0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:15.680562   42158 cri.go:89] found id: ""
	I0823 19:03:15.680569   42158 logs.go:284] 1 containers: [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d]
	I0823 19:03:15.680622   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:03:15.685639   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:03:15.685703   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:03:15.704302   42158 cri.go:89] found id: ""
	I0823 19:03:15.704323   42158 logs.go:284] 0 containers: []
	W0823 19:03:15.704334   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:03:15.704341   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:03:15.704394   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:03:15.740058   42158 cri.go:89] found id: ""
	I0823 19:03:15.740086   42158 logs.go:284] 0 containers: []
	W0823 19:03:15.740096   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:03:15.740115   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:03:15.740131   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:03:15.807005   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:03:15.807040   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:03:15.897593   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:03:15.897618   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:03:15.897629   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:03:15.957069   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:03:15.957109   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:03:15.984871   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:03:15.984905   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:03:15.995021   42158 logs.go:123] Gathering logs for kube-apiserver [30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8] ...
	I0823 19:03:15.995049   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 30834bb8b6d1c6ff5285b58c5ddcd21c64a0d8530f31a481ac2c09291e9a25a8"
	I0823 19:03:16.015882   42158 logs.go:123] Gathering logs for etcd [3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057] ...
	I0823 19:03:16.015914   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3feafd637ec7868b704060af8cd376f2b1ad2d20531f22efabe7e32468293057"
	I0823 19:03:16.031974   42158 logs.go:123] Gathering logs for kube-scheduler [17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7] ...
	I0823 19:03:16.032008   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 17cb7b848951390b246c76e94e36fde7ef8f49c75745e705b8e6471617fad7a7"
	I0823 19:03:16.086342   42158 logs.go:123] Gathering logs for kube-controller-manager [0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d] ...
	I0823 19:03:16.086374   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0cc76078361dda07bb30ae236f90d99abc33a23c3185de8f9969cc4cfb29090d"
	I0823 19:03:18.629360   42158 api_server.go:253] Checking apiserver healthz at https://192.168.72.172:8443/healthz ...
	I0823 19:03:18.630014   42158 api_server.go:269] stopped: https://192.168.72.172:8443/healthz: Get "https://192.168.72.172:8443/healthz": dial tcp 192.168.72.172:8443: connect: connection refused
	I0823 19:03:18.630075   42158 kubeadm.go:640] restartCluster took 4m12.111144978s
	W0823 19:03:18.630158   42158 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0823 19:03:18.630188   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0823 19:03:19.807367   42158 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.177159116s)
	I0823 19:03:19.807420   42158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 19:03:19.819931   42158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 19:03:19.827967   42158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 19:03:19.837399   42158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 19:03:19.837446   42158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 19:03:19.934401   42158 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
	I0823 19:03:19.934473   42158 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 19:03:20.135374   42158 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 19:03:20.135537   42158 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 19:03:20.135685   42158 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 19:03:20.250088   42158 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 19:03:20.252916   42158 out.go:204]   - Generating certificates and keys ...
	I0823 19:03:20.253054   42158 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 19:03:20.253139   42158 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 19:03:20.253219   42158 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0823 19:03:20.253279   42158 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0823 19:03:20.253354   42158 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0823 19:03:20.253411   42158 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0823 19:03:20.254715   42158 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0823 19:03:20.254804   42158 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0823 19:03:20.254908   42158 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0823 19:03:20.255063   42158 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0823 19:03:20.255122   42158 kubeadm.go:322] [certs] Using the existing "sa" key
	I0823 19:03:20.255202   42158 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 19:03:20.478141   42158 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 19:03:20.571649   42158 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 19:03:20.782321   42158 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 19:03:21.057657   42158 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 19:03:21.072477   42158 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 19:03:21.074019   42158 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 19:03:21.074112   42158 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 19:03:21.243776   42158 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 19:03:21.245595   42158 out.go:204]   - Booting up control plane ...
	I0823 19:03:21.245723   42158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 19:03:21.257705   42158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 19:03:21.259160   42158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 19:03:21.260273   42158 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 19:03:21.266114   42158 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 19:04:01.266861   42158 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0823 19:07:21.279912   42158 kubeadm.go:322] 
	I0823 19:07:21.279989   42158 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0823 19:07:21.280572   42158 kubeadm.go:322] 		timed out waiting for the condition
	I0823 19:07:21.280595   42158 kubeadm.go:322] 
	I0823 19:07:21.280670   42158 kubeadm.go:322] 	This error is likely caused by:
	I0823 19:07:21.280749   42158 kubeadm.go:322] 		- The kubelet is not running
	I0823 19:07:21.280938   42158 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0823 19:07:21.280965   42158 kubeadm.go:322] 
	I0823 19:07:21.281116   42158 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0823 19:07:21.281170   42158 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0823 19:07:21.281212   42158 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0823 19:07:21.281223   42158 kubeadm.go:322] 
	I0823 19:07:21.281318   42158 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0823 19:07:21.281446   42158 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0823 19:07:21.281464   42158 kubeadm.go:322] 
	I0823 19:07:21.281639   42158 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0823 19:07:21.281734   42158 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0823 19:07:21.281859   42158 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0823 19:07:21.281976   42158 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0823 19:07:21.281991   42158 kubeadm.go:322] 
	I0823 19:07:21.284138   42158 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 19:07:21.284254   42158 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0823 19:07:21.284416   42158 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0823 19:07:21.284475   42158 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
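	(A minimal troubleshooting sketch, assuming shell access to the VM, e.g. via 'minikube ssh'; the commands are taken verbatim from the kubeadm guidance above.)
	# Check whether the kubelet is running and inspect its recent journal entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List control-plane containers via containerd's CRI socket, then inspect the failing one
	sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID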
	
	I0823 19:07:21.284515   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0823 19:07:22.348277   42158 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.063733241s)
	I0823 19:07:22.348358   42158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 19:07:22.360909   42158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 19:07:22.372035   42158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 19:07:22.372082   42158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 19:07:22.468019   42158 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
	I0823 19:07:22.468090   42158 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 19:07:22.631652   42158 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 19:07:22.631807   42158 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 19:07:22.631951   42158 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 19:07:22.770838   42158 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 19:07:22.773854   42158 out.go:204]   - Generating certificates and keys ...
	I0823 19:07:22.774007   42158 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 19:07:22.774115   42158 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 19:07:22.774241   42158 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0823 19:07:22.774344   42158 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0823 19:07:22.774463   42158 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0823 19:07:22.774530   42158 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0823 19:07:22.774625   42158 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0823 19:07:22.774740   42158 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0823 19:07:22.774946   42158 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0823 19:07:22.776228   42158 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0823 19:07:22.776664   42158 kubeadm.go:322] [certs] Using the existing "sa" key
	I0823 19:07:22.776742   42158 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 19:07:23.193760   42158 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 19:07:23.329929   42158 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 19:07:23.509768   42158 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 19:07:23.672819   42158 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 19:07:23.697842   42158 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 19:07:23.698853   42158 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 19:07:23.698945   42158 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 19:07:23.898519   42158 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 19:07:23.900483   42158 out.go:204]   - Booting up control plane ...
	I0823 19:07:23.900625   42158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 19:07:23.917670   42158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 19:07:23.919743   42158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 19:07:23.921004   42158 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 19:07:23.924163   42158 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 19:08:03.922807   42158 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0823 19:11:23.928843   42158 kubeadm.go:322] 
	I0823 19:11:23.928929   42158 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0823 19:11:23.929049   42158 kubeadm.go:322] 		timed out waiting for the condition
	I0823 19:11:23.929063   42158 kubeadm.go:322] 
	I0823 19:11:23.929108   42158 kubeadm.go:322] 	This error is likely caused by:
	I0823 19:11:23.929159   42158 kubeadm.go:322] 		- The kubelet is not running
	I0823 19:11:23.929318   42158 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0823 19:11:23.929342   42158 kubeadm.go:322] 
	I0823 19:11:23.929491   42158 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0823 19:11:23.929559   42158 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0823 19:11:23.929618   42158 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0823 19:11:23.929635   42158 kubeadm.go:322] 
	I0823 19:11:23.929781   42158 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0823 19:11:23.929926   42158 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0823 19:11:23.929939   42158 kubeadm.go:322] 
	I0823 19:11:23.930098   42158 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0823 19:11:23.930236   42158 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0823 19:11:23.930340   42158 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0823 19:11:23.930454   42158 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0823 19:11:23.930468   42158 kubeadm.go:322] 
	I0823 19:11:23.931866   42158 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 19:11:23.931955   42158 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0823 19:11:23.932029   42158 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0823 19:11:23.932085   42158 kubeadm.go:406] StartCluster complete in 12m17.436579576s
	I0823 19:11:23.932126   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0823 19:11:23.932180   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0823 19:11:23.954663   42158 cri.go:89] found id: "1b1c6998077db7fa5179fce260a7e73bec29c9093701291f1490acc4fdc89375"
	I0823 19:11:23.954685   42158 cri.go:89] found id: ""
	I0823 19:11:23.954694   42158 logs.go:284] 1 containers: [1b1c6998077db7fa5179fce260a7e73bec29c9093701291f1490acc4fdc89375]
	I0823 19:11:23.954753   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:11:23.958982   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0823 19:11:23.959047   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0823 19:11:23.975491   42158 cri.go:89] found id: "ddc386c842cc37f955ac15178428edd8922d6e126592abae2b507385cdac8110"
	I0823 19:11:23.975508   42158 cri.go:89] found id: ""
	I0823 19:11:23.975517   42158 logs.go:284] 1 containers: [ddc386c842cc37f955ac15178428edd8922d6e126592abae2b507385cdac8110]
	I0823 19:11:23.975559   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:11:23.979812   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0823 19:11:23.979863   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0823 19:11:23.993735   42158 cri.go:89] found id: ""
	I0823 19:11:23.993758   42158 logs.go:284] 0 containers: []
	W0823 19:11:23.993766   42158 logs.go:286] No container was found matching "coredns"
	I0823 19:11:23.993774   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0823 19:11:23.993814   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0823 19:11:24.008880   42158 cri.go:89] found id: "88bcd0f52c613ceff3664e130c28fd877546ff0412e2a48b51d26b6bb44301b7"
	I0823 19:11:24.008896   42158 cri.go:89] found id: ""
	I0823 19:11:24.008903   42158 logs.go:284] 1 containers: [88bcd0f52c613ceff3664e130c28fd877546ff0412e2a48b51d26b6bb44301b7]
	I0823 19:11:24.008946   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:11:24.012605   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0823 19:11:24.012641   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0823 19:11:24.027636   42158 cri.go:89] found id: ""
	I0823 19:11:24.027661   42158 logs.go:284] 0 containers: []
	W0823 19:11:24.027670   42158 logs.go:286] No container was found matching "kube-proxy"
	I0823 19:11:24.027676   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0823 19:11:24.027730   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0823 19:11:24.042260   42158 cri.go:89] found id: "dccf017cb587c97c25c65fc13bb22c7eb422fe4505649171732b338d9d172708"
	I0823 19:11:24.042275   42158 cri.go:89] found id: ""
	I0823 19:11:24.042283   42158 logs.go:284] 1 containers: [dccf017cb587c97c25c65fc13bb22c7eb422fe4505649171732b338d9d172708]
	I0823 19:11:24.042332   42158 ssh_runner.go:195] Run: which crictl
	I0823 19:11:24.046109   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0823 19:11:24.046148   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0823 19:11:24.061439   42158 cri.go:89] found id: ""
	I0823 19:11:24.061460   42158 logs.go:284] 0 containers: []
	W0823 19:11:24.061469   42158 logs.go:286] No container was found matching "kindnet"
	I0823 19:11:24.061477   42158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0823 19:11:24.061519   42158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0823 19:11:24.077124   42158 cri.go:89] found id: ""
	I0823 19:11:24.077145   42158 logs.go:284] 0 containers: []
	W0823 19:11:24.077150   42158 logs.go:286] No container was found matching "storage-provisioner"
	I0823 19:11:24.077167   42158 logs.go:123] Gathering logs for etcd [ddc386c842cc37f955ac15178428edd8922d6e126592abae2b507385cdac8110] ...
	I0823 19:11:24.077183   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ddc386c842cc37f955ac15178428edd8922d6e126592abae2b507385cdac8110"
	I0823 19:11:24.092443   42158 logs.go:123] Gathering logs for kube-scheduler [88bcd0f52c613ceff3664e130c28fd877546ff0412e2a48b51d26b6bb44301b7] ...
	I0823 19:11:24.092475   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 88bcd0f52c613ceff3664e130c28fd877546ff0412e2a48b51d26b6bb44301b7"
	I0823 19:11:24.166540   42158 logs.go:123] Gathering logs for containerd ...
	I0823 19:11:24.166576   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0823 19:11:24.230896   42158 logs.go:123] Gathering logs for container status ...
	I0823 19:11:24.230933   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0823 19:11:24.255920   42158 logs.go:123] Gathering logs for kubelet ...
	I0823 19:11:24.255943   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0823 19:11:24.314921   42158 logs.go:123] Gathering logs for dmesg ...
	I0823 19:11:24.314955   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0823 19:11:24.325270   42158 logs.go:123] Gathering logs for describe nodes ...
	I0823 19:11:24.325312   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0823 19:11:24.408834   42158 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0823 19:11:24.408881   42158 logs.go:123] Gathering logs for kube-apiserver [1b1c6998077db7fa5179fce260a7e73bec29c9093701291f1490acc4fdc89375] ...
	I0823 19:11:24.408896   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1b1c6998077db7fa5179fce260a7e73bec29c9093701291f1490acc4fdc89375"
	I0823 19:11:24.437118   42158 logs.go:123] Gathering logs for kube-controller-manager [dccf017cb587c97c25c65fc13bb22c7eb422fe4505649171732b338d9d172708] ...
	I0823 19:11:24.437148   42158 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 dccf017cb587c97c25c65fc13bb22c7eb422fe4505649171732b338d9d172708"
	W0823 19:11:24.474826   42158 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0823 19:11:24.474879   42158 out.go:239] * 
	* 
	W0823 19:11:24.474939   42158 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0823 19:11:24.474960   42158 out.go:239] * 
	* 
	W0823 19:11:24.475776   42158 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 19:11:24.478583   42158 out.go:177] 
	W0823 19:11:24.479881   42158 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0823 19:11:24.479937   42158 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0823 19:11:24.479959   42158 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0823 19:11:24.481355   42158 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.22.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-228249 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1019.49s)
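
The failure output above suggests two follow-ups: inspecting the kubelet inside the VM and retrying the start with an explicit cgroup driver. Below is a minimal reproduction sketch, assuming the stopped-upgrade-228249 profile from this run still exists and that the CI-built binary is used; the inner commands and the --extra-config flag are the ones quoted in the log, only wrapped in `minikube ssh` here.

	# Check kubelet health inside the upgraded VM (profile name taken from this run; assumes the profile has not been deleted)
	out/minikube-linux-amd64 ssh -p stopped-upgrade-228249 -- "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p stopped-upgrade-228249 -- "sudo journalctl -xeu kubelet"

	# List control-plane containers through the containerd CRI socket
	out/minikube-linux-amd64 ssh -p stopped-upgrade-228249 -- "sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause"

	# Retry the upgrade with the cgroup-driver override suggested by minikube
	out/minikube-linux-amd64 start -p stopped-upgrade-228249 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd
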

                                                
                                    

Test pass (264/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 60.06
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.0/json-events 22.53
11 TestDownloadOnly/v1.28.0/preload-exists 0
15 TestDownloadOnly/v1.28.0/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
19 TestBinaryMirror 0.52
20 TestOffline 90.38
22 TestAddons/Setup 144.86
24 TestAddons/parallel/Registry 21.91
25 TestAddons/parallel/Ingress 27.82
26 TestAddons/parallel/InspektorGadget 11.25
27 TestAddons/parallel/MetricsServer 6.1
28 TestAddons/parallel/HelmTiller 14.7
30 TestAddons/parallel/CSI 49.35
31 TestAddons/parallel/Headlamp 18.84
32 TestAddons/parallel/CloudSpanner 5.72
35 TestAddons/serial/GCPAuth/Namespaces 0.13
36 TestAddons/StoppedEnableDisable 92.46
37 TestCertOptions 50.41
38 TestCertExpiration 243.06
40 TestForceSystemdFlag 67.2
41 TestForceSystemdEnv 91.64
43 TestKVMDriverInstallOrUpdate 7.25
47 TestErrorSpam/setup 47.6
48 TestErrorSpam/start 0.32
49 TestErrorSpam/status 0.69
50 TestErrorSpam/pause 1.37
51 TestErrorSpam/unpause 1.51
52 TestErrorSpam/stop 1.46
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 59.37
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 39.39
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.95
64 TestFunctional/serial/CacheCmd/cache/add_local 3.4
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.01
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 44.96
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.33
75 TestFunctional/serial/LogsFileCmd 1.29
76 TestFunctional/serial/InvalidService 4.95
78 TestFunctional/parallel/ConfigCmd 0.28
79 TestFunctional/parallel/DashboardCmd 30.83
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.16
82 TestFunctional/parallel/StatusCmd 0.9
86 TestFunctional/parallel/ServiceCmdConnect 11.47
87 TestFunctional/parallel/AddonsCmd 0.11
88 TestFunctional/parallel/PersistentVolumeClaim 55.18
90 TestFunctional/parallel/SSHCmd 0.43
91 TestFunctional/parallel/CpCmd 0.85
92 TestFunctional/parallel/MySQL 35.67
93 TestFunctional/parallel/FileSync 0.2
94 TestFunctional/parallel/CertSync 1.35
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
102 TestFunctional/parallel/License 0.8
103 TestFunctional/parallel/Version/short 0.13
104 TestFunctional/parallel/Version/components 0.68
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
109 TestFunctional/parallel/ImageCommands/ImageBuild 5.4
110 TestFunctional/parallel/ImageCommands/Setup 2.76
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.17
123 TestFunctional/parallel/ServiceCmd/List 0.27
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.03
127 TestFunctional/parallel/ServiceCmd/Format 0.29
128 TestFunctional/parallel/ServiceCmd/URL 0.29
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
133 TestFunctional/parallel/ProfileCmd/profile_list 0.27
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
135 TestFunctional/parallel/MountCmd/any-port 28.9
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.2
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.42
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.33
140 TestFunctional/parallel/MountCmd/specific-port 1.91
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.18
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 98.08
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.03
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.79
155 TestJSONOutput/start/Command 99.55
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.62
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.57
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 7.08
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.18
183 TestMainNoArgs 0.04
184 TestMinikubeProfile 101.02
187 TestMountStart/serial/StartWithMountFirst 28.04
188 TestMountStart/serial/VerifyMountFirst 0.38
189 TestMountStart/serial/StartWithMountSecond 29.74
190 TestMountStart/serial/VerifyMountSecond 0.36
191 TestMountStart/serial/DeleteFirst 0.85
192 TestMountStart/serial/VerifyMountPostDelete 0.37
193 TestMountStart/serial/Stop 1.19
194 TestMountStart/serial/RestartStopped 24.59
195 TestMountStart/serial/VerifyMountPostStop 0.38
198 TestMultiNode/serial/FreshStart2Nodes 189.83
199 TestMultiNode/serial/DeployApp2Nodes 6.33
200 TestMultiNode/serial/PingHostFrom2Pods 0.85
201 TestMultiNode/serial/AddNode 46.04
202 TestMultiNode/serial/ProfileList 0.2
203 TestMultiNode/serial/CopyFile 7.25
204 TestMultiNode/serial/StopNode 2.21
205 TestMultiNode/serial/StartAfterStop 27.32
206 TestMultiNode/serial/RestartKeepsNodes 312.45
207 TestMultiNode/serial/DeleteNode 1.74
208 TestMultiNode/serial/StopMultiNode 183.73
209 TestMultiNode/serial/RestartMultiNode 89.87
210 TestMultiNode/serial/ValidateNameConflict 50.31
215 TestPreload 336.93
217 TestScheduledStopUnix 117.9
223 TestKubernetesUpgrade 210.1
226 TestStoppedBinaryUpgrade/Setup 3.28
227 TestPause/serial/Start 156.26
229 TestPause/serial/SecondStartNoReconfiguration 7.42
230 TestPause/serial/Pause 0.67
231 TestPause/serial/VerifyStatus 0.25
232 TestPause/serial/Unpause 0.79
233 TestPause/serial/PauseAgain 0.71
234 TestPause/serial/DeletePaused 0.98
235 TestPause/serial/VerifyDeletedResources 11.34
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
238 TestNoKubernetes/serial/StartWithK8s 50.86
246 TestNetworkPlugins/group/false 3.21
250 TestNoKubernetes/serial/StartWithStopK8s 66.87
251 TestNoKubernetes/serial/Start 33.88
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
253 TestNoKubernetes/serial/ProfileList 15.51
254 TestNoKubernetes/serial/Stop 1.23
255 TestNoKubernetes/serial/StartNoArgs 26.83
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
264 TestNetworkPlugins/group/auto/Start 89.05
265 TestNetworkPlugins/group/kindnet/Start 72.14
266 TestNetworkPlugins/group/auto/KubeletFlags 0.22
267 TestNetworkPlugins/group/auto/NetCatPod 11.47
268 TestNetworkPlugins/group/auto/DNS 0.24
269 TestNetworkPlugins/group/auto/Localhost 0.2
270 TestNetworkPlugins/group/auto/HairPin 0.17
271 TestNetworkPlugins/group/calico/Start 95.93
272 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
273 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
274 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
275 TestNetworkPlugins/group/kindnet/DNS 0.18
276 TestNetworkPlugins/group/kindnet/Localhost 0.15
277 TestNetworkPlugins/group/kindnet/HairPin 0.17
278 TestNetworkPlugins/group/custom-flannel/Start 89.15
279 TestNetworkPlugins/group/calico/ControllerPod 5.04
280 TestNetworkPlugins/group/calico/KubeletFlags 0.22
281 TestNetworkPlugins/group/calico/NetCatPod 10.43
282 TestNetworkPlugins/group/calico/DNS 0.18
283 TestNetworkPlugins/group/calico/Localhost 0.14
284 TestNetworkPlugins/group/calico/HairPin 0.15
285 TestNetworkPlugins/group/enable-default-cni/Start 104.73
286 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
287 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
288 TestNetworkPlugins/group/custom-flannel/DNS 0.18
289 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
290 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
291 TestNetworkPlugins/group/flannel/Start 87.34
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.42
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
297 TestNetworkPlugins/group/flannel/ControllerPod 5.02
298 TestNetworkPlugins/group/bridge/Start 102.56
299 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
300 TestNetworkPlugins/group/flannel/NetCatPod 10.4
301 TestNetworkPlugins/group/flannel/DNS 0.16
302 TestNetworkPlugins/group/flannel/Localhost 0.14
303 TestNetworkPlugins/group/flannel/HairPin 0.15
305 TestStartStop/group/old-k8s-version/serial/FirstStart 136.61
306 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
307 TestNetworkPlugins/group/bridge/NetCatPod 11.43
308 TestNetworkPlugins/group/bridge/DNS 0.16
309 TestNetworkPlugins/group/bridge/Localhost 0.15
310 TestNetworkPlugins/group/bridge/HairPin 0.14
312 TestStartStop/group/no-preload/serial/FirstStart 118.31
313 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
314 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
315 TestStartStop/group/old-k8s-version/serial/Stop 102.41
316 TestStartStop/group/no-preload/serial/DeployApp 11.5
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
318 TestStartStop/group/no-preload/serial/Stop 92.36
319 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.57
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
323 TestStartStop/group/old-k8s-version/serial/SecondStart 450.42
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
325 TestStartStop/group/no-preload/serial/SecondStart 334.38
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.42
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.21
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 329.58
332 TestStartStop/group/newest-cni/serial/FirstStart 78.78
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.36
335 TestStartStop/group/newest-cni/serial/Stop 2.09
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/newest-cni/serial/SecondStart 49.33
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/newest-cni/serial/Pause 2.44
343 TestStartStop/group/embed-certs/serial/FirstStart 64.37
344 TestStartStop/group/embed-certs/serial/DeployApp 10.54
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 21.02
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
347 TestStartStop/group/embed-certs/serial/Stop 91.74
348 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
349 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
350 TestStartStop/group/no-preload/serial/Pause 2.48
351 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
352 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
353 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
354 TestStartStop/group/old-k8s-version/serial/Pause 2.35
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
356 TestStartStop/group/embed-certs/serial/SecondStart 328.89
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.47
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.02
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
364 TestStartStop/group/embed-certs/serial/Pause 2.42
TestDownloadOnly/v1.16.0/json-events (60.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-969338 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-969338 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m0.06240587s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (60.06s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-969338
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-969338: exit status 85 (56.564181ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-969338 | jenkins | v1.31.2 | 23 Aug 23 18:12 UTC |          |
	|         | -p download-only-969338        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 18:12:37
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 18:12:37.470102   18384 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:12:37.470229   18384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:12:37.470237   18384 out.go:309] Setting ErrFile to fd 2...
	I0823 18:12:37.470241   18384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:12:37.470425   18384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	W0823 18:12:37.470542   18384 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17086-11104/.minikube/config/config.json: open /home/jenkins/minikube-integration/17086-11104/.minikube/config/config.json: no such file or directory
	I0823 18:12:37.471100   18384 out.go:303] Setting JSON to true
	I0823 18:12:37.471896   18384 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3301,"bootTime":1692811056,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 18:12:37.471959   18384 start.go:138] virtualization: kvm guest
	I0823 18:12:37.474125   18384 out.go:97] [download-only-969338] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0823 18:12:37.475463   18384 out.go:169] MINIKUBE_LOCATION=17086
	W0823 18:12:37.474225   18384 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball: no such file or directory
	I0823 18:12:37.474275   18384 notify.go:220] Checking for updates...
	I0823 18:12:37.478006   18384 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 18:12:37.479300   18384 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 18:12:37.480531   18384 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 18:12:37.481658   18384 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0823 18:12:37.484588   18384 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0823 18:12:37.484844   18384 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 18:12:37.586037   18384 out.go:97] Using the kvm2 driver based on user configuration
	I0823 18:12:37.586064   18384 start.go:298] selected driver: kvm2
	I0823 18:12:37.586082   18384 start.go:902] validating driver "kvm2" against <nil>
	I0823 18:12:37.586437   18384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 18:12:37.586553   18384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17086-11104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0823 18:12:37.600683   18384 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0823 18:12:37.600726   18384 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 18:12:37.601211   18384 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0823 18:12:37.601371   18384 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0823 18:12:37.601407   18384 cni.go:84] Creating CNI manager for ""
	I0823 18:12:37.601419   18384 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0823 18:12:37.601428   18384 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 18:12:37.601438   18384 start_flags.go:319] config:
	{Name:download-only-969338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-969338 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:12:37.601669   18384 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 18:12:37.603512   18384 out.go:97] Downloading VM boot image ...
	I0823 18:12:37.603541   18384 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17086-11104/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	E0823 18:12:37.920488   18384 iso.go:90] Unable to download https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso.sha256 Dst:/home/jenkins/minikube-integration/17086-11104/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso.download Pwd: Mode:2 Umask:---------- Detectors:[0x416e560 0x416e560 0x416e560 0x416e560 0x416e560 0x416e560 0x416e560] Decompressors:map[bz2:0xc0004b5e90 gz:0xc0004b5e98 tar:0xc0004b5e40 tar.bz2:0xc0004b5e50 tar.gz:0xc0004b5e60 tar.xz:0xc0004b5e70 tar.zst:0xc0004b5e80 tbz2:0xc0004b5e50 tgz:0xc0004b5e60 txz:0xc0004b5e70 tzst:0xc0004b5e80 xz:0xc0004b5ea0 zip:0xc0004b5eb0 zst:0xc0004b5ea8] Getters:map[file:0xc000b82300 http:0xc0005c0460 https:0xc0005c04b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	I0823 18:12:37.920535   18384 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 18:12:37.922337   18384 out.go:97] Downloading VM boot image ...
	I0823 18:12:37.922376   18384 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17086-11104/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0823 18:12:48.994222   18384 out.go:97] Starting control plane node download-only-969338 in cluster download-only-969338
	I0823 18:12:48.994244   18384 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0823 18:12:49.152691   18384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0823 18:12:49.152721   18384 cache.go:57] Caching tarball of preloaded images
	I0823 18:12:49.152877   18384 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0823 18:12:49.154764   18384 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0823 18:12:49.154790   18384 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0823 18:12:49.319678   18384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0823 18:13:06.696789   18384 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0823 18:13:06.696879   18384 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0823 18:13:07.603122   18384 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0823 18:13:07.603437   18384 profile.go:148] Saving config to /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/download-only-969338/config.json ...
	I0823 18:13:07.603463   18384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/download-only-969338/config.json: {Name:mk15d2ce1afe41404dae5d86313ef82ad093c979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 18:13:07.603618   18384 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0823 18:13:07.603786   18384 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17086-11104/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-969338"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
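
The v1.16.0 download log above shows the ISO checksum fetch from storage.googleapis.com returning a 404, after which minikube retries the GitHub release URL. As a rough illustration of that try-primary-then-mirror pattern (a hedged sketch, not minikube's actual download code; only the destination file name and the mirror order are taken from the log):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetchWithFallback tries each URL in order and writes the first successful
	// response body to dst. Real downloaders (like the one logged above) also
	// verify checksums and report progress; this sketch skips both.
	func fetchWithFallback(dst string, urls ...string) error {
		for _, u := range urls {
			resp, err := http.Get(u)
			if err != nil {
				continue // network error: try the next mirror
			}
			if resp.StatusCode != http.StatusOK {
				resp.Body.Close()
				continue // e.g. the 404 seen in the log
			}
			f, err := os.Create(dst)
			if err != nil {
				resp.Body.Close()
				return err
			}
			_, copyErr := io.Copy(f, resp.Body)
			f.Close()
			resp.Body.Close()
			return copyErr
		}
		return fmt.Errorf("all mirrors failed for %s", dst)
	}

	func main() {
		// Mirror order mirrors the log: GCS bucket first, GitHub release second.
		if err := fetchWithFallback("minikube-v1.31.0-amd64.iso",
			"https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso",
			"https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso",
		); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}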

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/json-events (22.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-969338 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-969338 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (22.527862708s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.53s)
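
The json-events test drives `minikube start -o=json`, which emits one JSON event per output line. A minimal, hedged consumer of that stream (the invocation matches the test command; the "type" and "data" field names are assumptions about the event shape):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same shape of invocation as the test; each stdout line is a JSON event.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-969338",
			"--kubernetes-version=v1.28.0", "--driver=kvm2",
			"--container-runtime=containerd")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON line
			}
			fmt.Println(ev["type"], ev["data"]) // field names are assumptions
		}
		_ = cmd.Wait()
	}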

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-969338
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-969338: exit status 85 (52.831869ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-969338 | jenkins | v1.31.2 | 23 Aug 23 18:12 UTC |          |
	|         | -p download-only-969338        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-969338 | jenkins | v1.31.2 | 23 Aug 23 18:13 UTC |          |
	|         | -p download-only-969338        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 18:13:37
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 18:13:37.592074   18560 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:13:37.592192   18560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:13:37.592200   18560 out.go:309] Setting ErrFile to fd 2...
	I0823 18:13:37.592204   18560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:13:37.592386   18560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	W0823 18:13:37.592490   18560 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17086-11104/.minikube/config/config.json: open /home/jenkins/minikube-integration/17086-11104/.minikube/config/config.json: no such file or directory
	I0823 18:13:37.592904   18560 out.go:303] Setting JSON to true
	I0823 18:13:37.593694   18560 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3362,"bootTime":1692811056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 18:13:37.593745   18560 start.go:138] virtualization: kvm guest
	I0823 18:13:37.595918   18560 out.go:97] [download-only-969338] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0823 18:13:37.597557   18560 out.go:169] MINIKUBE_LOCATION=17086
	I0823 18:13:37.596106   18560 notify.go:220] Checking for updates...
	I0823 18:13:37.599113   18560 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 18:13:37.600531   18560 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 18:13:37.601841   18560 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 18:13:37.603588   18560 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0823 18:13:37.606293   18560 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0823 18:13:37.606683   18560 config.go:182] Loaded profile config "download-only-969338": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0823 18:13:37.606727   18560 start.go:810] api.Load failed for download-only-969338: filestore "download-only-969338": Docker machine "download-only-969338" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0823 18:13:37.606799   18560 driver.go:373] Setting default libvirt URI to qemu:///system
	W0823 18:13:37.606827   18560 start.go:810] api.Load failed for download-only-969338: filestore "download-only-969338": Docker machine "download-only-969338" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0823 18:13:37.639975   18560 out.go:97] Using the kvm2 driver based on existing profile
	I0823 18:13:37.640000   18560 start.go:298] selected driver: kvm2
	I0823 18:13:37.640005   18560 start.go:902] validating driver "kvm2" against &{Name:download-only-969338 KeepContext:false EmbedCerts:false MinikubeISO:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-969338 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:13:37.640398   18560 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 18:13:37.640462   18560 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17086-11104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0823 18:13:37.654709   18560 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0823 18:13:37.655398   18560 cni.go:84] Creating CNI manager for ""
	I0823 18:13:37.655414   18560 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0823 18:13:37.655424   18560 start_flags.go:319] config:
	{Name:download-only-969338 KeepContext:false EmbedCerts:false MinikubeISO:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-969338 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:13:37.655590   18560 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 18:13:37.657232   18560 out.go:97] Starting control plane node download-only-969338 in cluster download-only-969338
	I0823 18:13:37.657243   18560 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0823 18:13:38.313612   18560 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0823 18:13:38.313659   18560 cache.go:57] Caching tarball of preloaded images
	I0823 18:13:38.313800   18560 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0823 18:13:38.315940   18560 out.go:97] Downloading Kubernetes v1.28.0 preload ...
	I0823 18:13:38.315968   18560 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0823 18:13:38.474268   18560 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-969338"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-969338
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.52s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-624337 --alsologtostderr --binary-mirror http://127.0.0.1:38393 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-624337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-624337
--- PASS: TestBinaryMirror (0.52s)

                                                
                                    
x
+
TestOffline (90.38s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-224433 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-224433 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m29.303436515s)
helpers_test.go:175: Cleaning up "offline-containerd-224433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-224433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-224433: (1.077110298s)
--- PASS: TestOffline (90.38s)

                                                
                                    
x
+
TestAddons/Setup (144.86s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-789637 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-789637 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.855756313s)
--- PASS: TestAddons/Setup (144.86s)

                                                
                                    
x
+
TestAddons/parallel/Registry (21.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 23.321489ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-tmlcc" [f10a962a-ba23-43cd-9c2e-fa3d12ad2122] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01656779s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-twgsb" [4304a4eb-7652-4b18-ad10-13673a9743ce] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015070243s
addons_test.go:316: (dbg) Run:  kubectl --context addons-789637 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-789637 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-789637 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.058892606s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 ip
2023/08/23 18:16:47 [DEBUG] GET http://192.168.39.71:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.91s)
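
The registry check above both probes the in-cluster service (wget --spider from a busybox pod) and issues GET http://192.168.39.71:5000 against the node. A hedged host-side version of that reachability probe (the address is simply copied from the "[DEBUG] GET" log line):

	package main

	import (
		"fmt"
		"net/http"
		"os"
	)

	func main() {
		// Address taken from the log above; any 2xx response counts as healthy here.
		resp, err := http.Get("http://192.168.39.71:5000")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		if resp.StatusCode/100 != 2 {
			fmt.Fprintln(os.Stderr, "unexpected status:", resp.Status)
			os.Exit(1)
		}
		fmt.Println("registry responded:", resp.Status)
	}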

                                                
                                    
x
+
TestAddons/parallel/Ingress (27.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-789637 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context addons-789637 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (2.028688009s)
addons_test.go:208: (dbg) Run:  kubectl --context addons-789637 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-789637 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2f14baa9-1742-4615-b212-171906e4c2bc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2f14baa9-1742-4615-b212-171906e4c2bc] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.019562475s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-789637 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.71
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-789637 addons disable ingress-dns --alsologtostderr -v=1: (2.207454124s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-789637 addons disable ingress --alsologtostderr -v=1: (7.830764101s)
--- PASS: TestAddons/parallel/Ingress (27.82s)
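
The ingress test verifies name-based routing with `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`. For anyone reproducing that check by hand, a small Go sketch of the same request; note that in net/http the virtual-host name goes on req.Host rather than into the header map:

	package main

	import (
		"fmt"
		"net/http"
		"os"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // routes the request to the nginx ingress rule
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}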

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rxkhq" [39acdcec-30a7-402f-9dd9-92af12bdb5ff] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010950706s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-789637
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-789637: (6.236870604s)
--- PASS: TestAddons/parallel/InspektorGadget (11.25s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.1s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 23.332547ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xxwpm" [24e2780a-6000-4685-8f87-002431afe590] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018595121s
addons_test.go:391: (dbg) Run:  kubectl --context addons-789637 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.10s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (14.7s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 23.44475ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-z96mh" [6776908b-cd42-41c3-a354-d5e4dad16122] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.019230364s
addons_test.go:449: (dbg) Run:  kubectl --context addons-789637 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-789637 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.993107965s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.70s)

                                                
                                    
x
+
TestAddons/parallel/CSI (49.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.342381ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-789637 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-789637 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c0bf43bb-6fd7-48fa-b016-53b237d6ac3a] Pending
helpers_test.go:344: "task-pv-pod" [c0bf43bb-6fd7-48fa-b016-53b237d6ac3a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c0bf43bb-6fd7-48fa-b016-53b237d6ac3a] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.026677762s
addons_test.go:560: (dbg) Run:  kubectl --context addons-789637 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-789637 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-789637 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-789637 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-789637 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-789637 delete pod task-pv-pod: (1.252222551s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-789637 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-789637 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-789637 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [71c76981-d409-4a52-9702-9bb6bc5883f9] Pending
helpers_test.go:344: "task-pv-pod-restore" [71c76981-d409-4a52-9702-9bb6bc5883f9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [71c76981-d409-4a52-9702-9bb6bc5883f9] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.022280232s
addons_test.go:602: (dbg) Run:  kubectl --context addons-789637 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-789637 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-789637 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-789637 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.744308342s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-789637 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.35s)
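
The CSI test repeatedly shells out to `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim binds. A hedged sketch of that polling loop (the context and claim names come from the log; the timeout and interval are arbitrary choices):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls the same jsonpath the test helpers use until the
	// claim reports Bound or the deadline passes.
	func waitForPVCBound(context, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}",
				"-n", "default").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %q not Bound within %s", name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-789637", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}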

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-789637 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-789637 --alsologtostderr -v=1: (1.796536153s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-sfxnc" [15cb99f9-1c5a-489a-a8c6-631bee21450c] Pending
helpers_test.go:344: "headlamp-699c48fb74-sfxnc" [15cb99f9-1c5a-489a-a8c6-631bee21450c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-sfxnc" [15cb99f9-1c5a-489a-a8c6-631bee21450c] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.04320328s
--- PASS: TestAddons/parallel/Headlamp (18.84s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-t7v2k" [fbb9fada-6553-4fff-bb5f-0d0a195e05af] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.015948271s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-789637
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-789637 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-789637 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (92.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-789637
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-789637: (1m32.210245745s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-789637
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-789637
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-789637
--- PASS: TestAddons/StoppedEnableDisable (92.46s)

                                                
                                    
x
+
TestCertOptions (50.41s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-931593 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-931593 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (48.933618133s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-931593 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-931593 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-931593 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-931593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-931593
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-931593: (1.015709937s)
--- PASS: TestCertOptions (50.41s)
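
TestCertOptions inspects the generated API server certificate with `openssl x509 -text -noout` to confirm the extra SANs and port took effect. A rough Go equivalent for checking a certificate file copied out of the VM (the local path is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// e.g. a copy of /var/lib/minikube/certs/apiserver.crt fetched from the node
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The test expects the names/IPs passed via --apiserver-names/--apiserver-ips.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs:", cert.IPAddresses)
		fmt.Println("Expires:", cert.NotAfter)
	}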

                                                
                                    
x
+
TestCertExpiration (243.06s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-921179 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-921179 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (53.609910516s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-921179 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-921179 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (8.250353448s)
helpers_test.go:175: Cleaning up "cert-expiration-921179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-921179
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-921179: (1.195664525s)
--- PASS: TestCertExpiration (243.06s)

                                                
                                    
x
+
TestForceSystemdFlag (67.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-194992 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0823 18:58:00.842526   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-194992 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m5.9045443s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-194992 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-194992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-194992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-194992: (1.076923507s)
--- PASS: TestForceSystemdFlag (67.20s)

                                                
                                    
x
+
TestForceSystemdEnv (91.64s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-050010 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0823 18:55:53.326095   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:56:25.869287   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-050010 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m30.330208339s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-050010 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-050010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-050010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-050010: (1.066531667s)
--- PASS: TestForceSystemdEnv (91.64s)
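
Both force-systemd tests assert the cgroup driver by dumping /etc/containerd/config.toml from the VM. A naive host-side check of a copied config, shown only as a sketch (a proper check would parse the TOML rather than string-match, and the local file name is an assumption):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Assumes the config was first copied out of the VM to ./config.toml.
		data, err := os.ReadFile("config.toml")
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(data), "SystemdCgroup = true") {
			fmt.Println("containerd is using the systemd cgroup driver")
		} else {
			fmt.Println("SystemdCgroup is not enabled")
		}
	}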

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (7.25s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.25s)

                                                
                                    
x
+
TestErrorSpam/setup (47.6s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-566113 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-566113 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-566113 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-566113 --driver=kvm2  --container-runtime=containerd: (47.598461849s)
--- PASS: TestErrorSpam/setup (47.60s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
x
+
TestErrorSpam/pause (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 pause
--- PASS: TestErrorSpam/pause (1.37s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

                                                
                                    
x
+
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 stop: (1.337727732s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-566113 --log_dir /tmp/nospam-566113 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/test/nested/copy/18372/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (59.37s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573778 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-573778 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (59.365323778s)
--- PASS: TestFunctional/serial/StartWithProxy (59.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.39s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573778 --alsologtostderr -v=8
E0823 18:21:25.869943   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:25.875736   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:25.886019   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:25.906208   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:25.947149   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:26.027506   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:26.187919   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:26.508165   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:27.148815   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:28.429806   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:30.990506   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:36.110833   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:21:46.351766   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-573778 --alsologtostderr -v=8: (39.385679292s)
functional_test.go:659: soft start took 39.386250964s for "functional-573778" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.39s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-573778 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 cache add registry.k8s.io/pause:3.1: (1.259672377s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 cache add registry.k8s.io/pause:3.3: (1.382084744s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 cache add registry.k8s.io/pause:latest: (1.309574038s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

TestFunctional/serial/CacheCmd/cache/add_local (3.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-573778 /tmp/TestFunctionalserialCacheCmdcacheadd_local773482762/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cache add minikube-local-cache-test:functional-573778
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 cache add minikube-local-cache-test:functional-573778: (3.101320873s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cache delete minikube-local-cache-test:functional-573778
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-573778
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.182344ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 cache reload: (1.33832212s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0823 18:22:06.832367   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 kubectl -- --context functional-573778 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-573778 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (44.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573778 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0823 18:22:47.793521   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-573778 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.956121662s)
functional_test.go:757: restart took 44.956233725s for "functional-573778" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.96s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-573778 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 logs: (1.3277487s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 logs --file /tmp/TestFunctionalserialLogsFileCmd3633072509/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 logs --file /tmp/TestFunctionalserialLogsFileCmd3633072509/001/logs.txt: (1.285208822s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-573778 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-573778
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-573778: exit status 115 (287.275674ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.135:32018 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-573778 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-573778 delete -f testdata/invalidsvc.yaml: (1.322120885s)
--- PASS: TestFunctional/serial/InvalidService (4.95s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 config get cpus: exit status 14 (46.710879ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 config get cpus: exit status 14 (41.1664ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)

TestFunctional/parallel/DashboardCmd (30.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-573778 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-573778 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25349: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.83s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573778 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-573778 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (131.097243ms)

-- stdout --
	* [functional-573778] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0823 18:23:13.641298   24850 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:23:13.641426   24850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:23:13.641434   24850 out.go:309] Setting ErrFile to fd 2...
	I0823 18:23:13.641438   24850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:23:13.641664   24850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 18:23:13.642228   24850 out.go:303] Setting JSON to false
	I0823 18:23:13.643194   24850 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3938,"bootTime":1692811056,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 18:23:13.643250   24850 start.go:138] virtualization: kvm guest
	I0823 18:23:13.645204   24850 out.go:177] * [functional-573778] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0823 18:23:13.647100   24850 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 18:23:13.647101   24850 notify.go:220] Checking for updates...
	I0823 18:23:13.648446   24850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 18:23:13.649809   24850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 18:23:13.651118   24850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 18:23:13.653066   24850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0823 18:23:13.654319   24850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 18:23:13.656168   24850 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0823 18:23:13.656699   24850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:23:13.656757   24850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:23:13.672744   24850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0823 18:23:13.673114   24850 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:23:13.673791   24850 main.go:141] libmachine: Using API Version  1
	I0823 18:23:13.673811   24850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:23:13.674261   24850 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:23:13.674431   24850 main.go:141] libmachine: (functional-573778) Calling .DriverName
	I0823 18:23:13.674694   24850 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 18:23:13.675060   24850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:23:13.675108   24850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:23:13.691783   24850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I0823 18:23:13.692159   24850 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:23:13.692687   24850 main.go:141] libmachine: Using API Version  1
	I0823 18:23:13.692713   24850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:23:13.693053   24850 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:23:13.693240   24850 main.go:141] libmachine: (functional-573778) Calling .DriverName
	I0823 18:23:13.725624   24850 out.go:177] * Using the kvm2 driver based on existing profile
	I0823 18:23:13.726860   24850 start.go:298] selected driver: kvm2
	I0823 18:23:13.726872   24850 start.go:902] validating driver "kvm2" against &{Name:functional-573778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.0 ClusterName:functional-573778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.135 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:23:13.727006   24850 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 18:23:13.729009   24850 out.go:177] 
	W0823 18:23:13.730247   24850 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0823 18:23:13.731406   24850 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573778 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-573778 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-573778 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (156.216142ms)

-- stdout --
	* [functional-573778] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0823 18:23:13.928554   24910 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:23:13.928776   24910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:23:13.928788   24910 out.go:309] Setting ErrFile to fd 2...
	I0823 18:23:13.928794   24910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:23:13.929195   24910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 18:23:13.929918   24910 out.go:303] Setting JSON to false
	I0823 18:23:13.931135   24910 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3938,"bootTime":1692811056,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 18:23:13.931246   24910 start.go:138] virtualization: kvm guest
	I0823 18:23:13.933241   24910 out.go:177] * [functional-573778] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0823 18:23:13.934770   24910 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 18:23:13.934845   24910 notify.go:220] Checking for updates...
	I0823 18:23:13.937322   24910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 18:23:13.938746   24910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 18:23:13.939881   24910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 18:23:13.941150   24910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0823 18:23:13.942210   24910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 18:23:13.943852   24910 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0823 18:23:13.944545   24910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:23:13.944625   24910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:23:13.963455   24910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0823 18:23:13.963957   24910 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:23:13.964591   24910 main.go:141] libmachine: Using API Version  1
	I0823 18:23:13.964622   24910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:23:13.965039   24910 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:23:13.965264   24910 main.go:141] libmachine: (functional-573778) Calling .DriverName
	I0823 18:23:13.965511   24910 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 18:23:13.965934   24910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:23:13.965982   24910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:23:13.986195   24910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0823 18:23:13.986665   24910 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:23:13.987139   24910 main.go:141] libmachine: Using API Version  1
	I0823 18:23:13.987199   24910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:23:13.987510   24910 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:23:13.987687   24910 main.go:141] libmachine: (functional-573778) Calling .DriverName
	I0823 18:23:14.022873   24910 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0823 18:23:14.024005   24910 start.go:298] selected driver: kvm2
	I0823 18:23:14.024025   24910 start.go:902] validating driver "kvm2" against &{Name:functional-573778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.0 ClusterName:functional-573778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.135 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 18:23:14.024193   24910 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 18:23:14.026462   24910 out.go:177] 
	W0823 18:23:14.027796   24910 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0823 18:23:14.029061   24910 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

TestFunctional/parallel/ServiceCmdConnect (11.47s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-573778 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-573778 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-26vn6" [ffe563d7-d670-48b4-911c-02a70117426e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-26vn6" [ffe563d7-d670-48b4-911c-02a70117426e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.009418793s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.135:31645
functional_test.go:1674: http://192.168.50.135:31645: success! body:

Hostname: hello-node-connect-55497b8b78-26vn6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.135:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.135:31645
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.47s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (55.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d5306bba-fb2b-444b-9d4d-1bbbf67954d8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015676492s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-573778 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-573778 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-573778 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-573778 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-573778 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2c3e481a-f1bd-470e-8a7c-1e879f0936a2] Pending
helpers_test.go:344: "sp-pod" [2c3e481a-f1bd-470e-8a7c-1e879f0936a2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2c3e481a-f1bd-470e-8a7c-1e879f0936a2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.069058772s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-573778 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-573778 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-573778 delete -f testdata/storage-provisioner/pod.yaml: (1.027098065s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-573778 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d0151836-c579-4bbb-82c8-4c1518caa83d] Pending
helpers_test.go:344: "sp-pod" [d0151836-c579-4bbb-82c8-4c1518caa83d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d0151836-c579-4bbb-82c8-4c1518caa83d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.013196699s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-573778 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.18s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (0.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh -n functional-573778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 cp functional-573778:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1576801757/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh -n functional-573778 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.85s)

TestFunctional/parallel/MySQL (35.67s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-573778 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mvhpr" [801a718a-69cb-416f-99b2-df297c22af5d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mvhpr" [801a718a-69cb-416f-99b2-df297c22af5d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.049601078s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;": exit status 1 (145.402ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;": exit status 1 (251.9128ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;": exit status 1 (156.249569ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;": exit status 1 (215.309529ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-573778 exec mysql-859648c796-mvhpr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.67s)

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18372/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo cat /etc/test/nested/copy/18372/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18372.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo cat /etc/ssl/certs/18372.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18372.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo cat /usr/share/ca-certificates/18372.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/183722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo cat /etc/ssl/certs/183722.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/183722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo cat /usr/share/ca-certificates/183722.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-573778 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh "sudo systemctl is-active docker": exit status 1 (212.858949ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh "sudo systemctl is-active crio": exit status 1 (207.498278ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
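The non-zero exits above are the expected outcome: `systemctl is-active` exits 0 only when the unit is active, and here it prints "inactive" and exits 3, so the test treats a failed command plus "inactive" on stdout as proof that the runtime is disabled. A sketch of that interpretation (binary path and profile taken from the log; this is not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether a container runtime's systemd unit is not
// active inside the minikube VM.
func runtimeDisabled(profile, unit string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	// is-active exits non-zero for anything other than "active", so an
	// error here is expected when the unit is disabled.
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	fmt.Println("docker disabled:", runtimeDisabled("functional-573778", "docker"))
	fmt.Println("crio disabled:", runtimeDisabled("functional-573778", "crio"))
}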

                                                
                                    
x
+
TestFunctional/parallel/License (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573778 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.0
registry.k8s.io/kube-proxy:v1.28.0
registry.k8s.io/kube-controller-manager:v1.28.0
registry.k8s.io/kube-apiserver:v1.28.0
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-573778
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-573778
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573778 image ls --format short --alsologtostderr:
I0823 18:23:46.470089   26018 out.go:296] Setting OutFile to fd 1 ...
I0823 18:23:46.470242   26018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:46.470250   26018 out.go:309] Setting ErrFile to fd 2...
I0823 18:23:46.470255   26018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:46.470464   26018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
I0823 18:23:46.470992   26018 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:46.471085   26018 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:46.471396   26018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:46.471447   26018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:46.485989   26018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
I0823 18:23:46.486371   26018 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:46.486877   26018 main.go:141] libmachine: Using API Version  1
I0823 18:23:46.486898   26018 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:46.487255   26018 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:46.487424   26018 main.go:141] libmachine: (functional-573778) Calling .GetState
I0823 18:23:46.489137   26018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:46.489173   26018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:46.504784   26018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
I0823 18:23:46.505135   26018 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:46.505583   26018 main.go:141] libmachine: Using API Version  1
I0823 18:23:46.505606   26018 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:46.505883   26018 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:46.506074   26018 main.go:141] libmachine: (functional-573778) Calling .DriverName
I0823 18:23:46.506254   26018 ssh_runner.go:195] Run: systemctl --version
I0823 18:23:46.506276   26018 main.go:141] libmachine: (functional-573778) Calling .GetSSHHostname
I0823 18:23:46.509094   26018 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:46.509448   26018 main.go:141] libmachine: (functional-573778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:eb:fe", ip: ""} in network mk-functional-573778: {Iface:virbr1 ExpiryTime:2023-08-23 19:20:34 +0000 UTC Type:0 Mac:52:54:00:0f:eb:fe Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:functional-573778 Clientid:01:52:54:00:0f:eb:fe}
I0823 18:23:46.509488   26018 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined IP address 192.168.50.135 and MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:46.509593   26018 main.go:141] libmachine: (functional-573778) Calling .GetSSHPort
I0823 18:23:46.509779   26018 main.go:141] libmachine: (functional-573778) Calling .GetSSHKeyPath
I0823 18:23:46.509933   26018 main.go:141] libmachine: (functional-573778) Calling .GetSSHUsername
I0823 18:23:46.510078   26018 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/functional-573778/id_rsa Username:docker}
I0823 18:23:46.604852   26018 ssh_runner.go:195] Run: sudo crictl images --output json
I0823 18:23:46.640887   26018 main.go:141] libmachine: Making call to close driver server
I0823 18:23:46.640905   26018 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:46.641208   26018 main.go:141] libmachine: (functional-573778) DBG | Closing plugin on server side
I0823 18:23:46.641258   26018 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:46.641271   26018 main.go:141] libmachine: Making call to close connection to plugin binary
I0823 18:23:46.641281   26018 main.go:141] libmachine: Making call to close driver server
I0823 18:23:46.641290   26018 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:46.641468   26018 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:46.641483   26018 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
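As the stderr above shows, `image ls` works by SSHing into the node and running `sudo crictl images --output json`, then rendering the result; the short format is one repo:tag per line. A small sketch that consumes that short listing and looks for one of the tags printed above (paths and names taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-573778",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		panic(err)
	}
	// One repo:tag per line, as in the stdout above.
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "gcr.io/k8s-minikube/storage-provisioner:v5" {
			fmt.Println("found:", line)
		}
	}
}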

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573778 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:eea7b3 | 70.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.0            | sha256:bb5e0d | 34.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.0            | sha256:4be79c | 33.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.0            | sha256:ea1030 | 24.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b0b1fa | 27.7MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.0            | sha256:f6f496 | 18.8MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| docker.io/library/minikube-local-cache-test | functional-573778  | sha256:3700cc | 1kB    |
| docker.io/library/mysql                     | 5.7                | sha256:92034f | 170MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| gcr.io/google-containers/addon-resizer      | functional-573778  | sha256:ffd4cf | 10.8MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573778 image ls --format table --alsologtostderr:
I0823 18:23:49.998484   26162 out.go:296] Setting OutFile to fd 1 ...
I0823 18:23:49.998604   26162 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:49.998612   26162 out.go:309] Setting ErrFile to fd 2...
I0823 18:23:49.998617   26162 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:49.998843   26162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
I0823 18:23:49.999374   26162 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:49.999464   26162 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:49.999896   26162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:49.999941   26162 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:50.014436   26162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
I0823 18:23:50.014907   26162 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:50.015505   26162 main.go:141] libmachine: Using API Version  1
I0823 18:23:50.015522   26162 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:50.015836   26162 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:50.016010   26162 main.go:141] libmachine: (functional-573778) Calling .GetState
I0823 18:23:50.017950   26162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:50.018004   26162 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:50.032158   26162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38779
I0823 18:23:50.032588   26162 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:50.033048   26162 main.go:141] libmachine: Using API Version  1
I0823 18:23:50.033065   26162 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:50.033396   26162 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:50.033585   26162 main.go:141] libmachine: (functional-573778) Calling .DriverName
I0823 18:23:50.033838   26162 ssh_runner.go:195] Run: systemctl --version
I0823 18:23:50.033864   26162 main.go:141] libmachine: (functional-573778) Calling .GetSSHHostname
I0823 18:23:50.036398   26162 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:50.036772   26162 main.go:141] libmachine: (functional-573778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:eb:fe", ip: ""} in network mk-functional-573778: {Iface:virbr1 ExpiryTime:2023-08-23 19:20:34 +0000 UTC Type:0 Mac:52:54:00:0f:eb:fe Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:functional-573778 Clientid:01:52:54:00:0f:eb:fe}
I0823 18:23:50.036803   26162 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined IP address 192.168.50.135 and MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:50.036914   26162 main.go:141] libmachine: (functional-573778) Calling .GetSSHPort
I0823 18:23:50.037084   26162 main.go:141] libmachine: (functional-573778) Calling .GetSSHKeyPath
I0823 18:23:50.037247   26162 main.go:141] libmachine: (functional-573778) Calling .GetSSHUsername
I0823 18:23:50.037386   26162 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/functional-573778/id_rsa Username:docker}
I0823 18:23:50.135951   26162 ssh_runner.go:195] Run: sudo crictl images --output json
I0823 18:23:50.173963   26162 main.go:141] libmachine: Making call to close driver server
I0823 18:23:50.173985   26162 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:50.174305   26162 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:50.174371   26162 main.go:141] libmachine: Making call to close connection to plugin binary
I0823 18:23:50.174396   26162 main.go:141] libmachine: Making call to close driver server
I0823 18:23:50.174416   26162 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:50.174305   26162 main.go:141] libmachine: (functional-573778) DBG | Closing plugin on server side
I0823 18:23:50.174701   26162 main.go:141] libmachine: (functional-573778) DBG | Closing plugin on server side
I0823 18:23:50.174710   26162 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:50.174746   26162 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573778 image ls --format json --alsologtostderr:
[{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"27731571"},{"id":"sha256:eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":["docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c"],"repoTags":["docker.io/library/nginx:latest"],"size":"70479485"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b
59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-573778"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8f051bf3b9957989f7923c9a59df626c077bfe12bf3cee609aa3785cf35c877"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.0"],"size":"33395913"},{"id":"sha256:3700ccdb0e3e3af224de8b94ebb9c8db5332f32d1890f1c8be22bd90f7154c45","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-573778"],"size":"1005"},{"id":"sha256:92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a
9800e8934a2f5828ecc8730531db8142af83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"170252563"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a","repoDigests":["registry.k8s.io/kube-proxy@sha256:9e8b2882f54a0293a933066fee9ff9f6c4335a07637b7725e375b6a2ab00e215"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.0"],"size":"24555100"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigest
s":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ba59d8e826cb75bdbe206efdbfe2cf48f0e56ea8a2ad96740a10adf8afcfec6e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.0"],"size":"34617452"},{"id":"sha256:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157","repoDi
gests":["registry.k8s.io/kube-scheduler@sha256:00db467fe4aa089cbba649fa69ec95c5ca753bf6332289cde4a37cead76c291d"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.0"],"size":"18802388"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573778 image ls --format json --alsologtostderr:
I0823 18:23:49.737657   26138 out.go:296] Setting OutFile to fd 1 ...
I0823 18:23:49.737759   26138 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:49.737766   26138 out.go:309] Setting ErrFile to fd 2...
I0823 18:23:49.737770   26138 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:49.737953   26138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
I0823 18:23:49.738472   26138 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:49.738559   26138 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:49.738873   26138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:49.738925   26138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:49.753525   26138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45013
I0823 18:23:49.754057   26138 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:49.754740   26138 main.go:141] libmachine: Using API Version  1
I0823 18:23:49.754767   26138 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:49.755225   26138 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:49.755414   26138 main.go:141] libmachine: (functional-573778) Calling .GetState
I0823 18:23:49.757616   26138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:49.757667   26138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:49.772319   26138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36329
I0823 18:23:49.772720   26138 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:49.773284   26138 main.go:141] libmachine: Using API Version  1
I0823 18:23:49.773356   26138 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:49.773756   26138 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:49.773977   26138 main.go:141] libmachine: (functional-573778) Calling .DriverName
I0823 18:23:49.774224   26138 ssh_runner.go:195] Run: systemctl --version
I0823 18:23:49.774253   26138 main.go:141] libmachine: (functional-573778) Calling .GetSSHHostname
I0823 18:23:49.777378   26138 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:49.777926   26138 main.go:141] libmachine: (functional-573778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:eb:fe", ip: ""} in network mk-functional-573778: {Iface:virbr1 ExpiryTime:2023-08-23 19:20:34 +0000 UTC Type:0 Mac:52:54:00:0f:eb:fe Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:functional-573778 Clientid:01:52:54:00:0f:eb:fe}
I0823 18:23:49.777960   26138 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined IP address 192.168.50.135 and MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:49.778151   26138 main.go:141] libmachine: (functional-573778) Calling .GetSSHPort
I0823 18:23:49.778358   26138 main.go:141] libmachine: (functional-573778) Calling .GetSSHKeyPath
I0823 18:23:49.778555   26138 main.go:141] libmachine: (functional-573778) Calling .GetSSHUsername
I0823 18:23:49.778731   26138 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/functional-573778/id_rsa Username:docker}
I0823 18:23:49.885107   26138 ssh_runner.go:195] Run: sudo crictl images --output json
I0823 18:23:49.951406   26138 main.go:141] libmachine: Making call to close driver server
I0823 18:23:49.951424   26138 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:49.951695   26138 main.go:141] libmachine: (functional-573778) DBG | Closing plugin on server side
I0823 18:23:49.951714   26138 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:49.951728   26138 main.go:141] libmachine: Making call to close connection to plugin binary
I0823 18:23:49.951737   26138 main.go:141] libmachine: Making call to close driver server
I0823 18:23:49.951751   26138 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:49.951977   26138 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:49.952000   26138 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
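The JSON stdout above is a top-level array of objects with id, repoDigests, repoTags, and size (size as a string). A minimal sketch that decodes exactly that shape (struct fields mirror the output shown; binary path and profile are the ones from the log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-573778",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.ID[:19], img.RepoTags)
	}
}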

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-573778 image ls --format yaml --alsologtostderr:
- id: sha256:eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests:
- docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
repoTags:
- docker.io/library/nginx:latest
size: "70479485"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
repoTags:
- docker.io/library/mysql:5.7
size: "170252563"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:9e8b2882f54a0293a933066fee9ff9f6c4335a07637b7725e375b6a2ab00e215
repoTags:
- registry.k8s.io/kube-proxy:v1.28.0
size: "24555100"
- id: sha256:bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ba59d8e826cb75bdbe206efdbfe2cf48f0e56ea8a2ad96740a10adf8afcfec6e
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.0
size: "34617452"
- id: sha256:f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:00db467fe4aa089cbba649fa69ec95c5ca753bf6332289cde4a37cead76c291d
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.0
size: "18802388"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "27731571"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8f051bf3b9957989f7923c9a59df626c077bfe12bf3cee609aa3785cf35c877
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.0
size: "33395913"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:3700ccdb0e3e3af224de8b94ebb9c8db5332f32d1890f1c8be22bd90f7154c45
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-573778
size: "1005"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-573778
size: "10823156"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573778 image ls --format yaml --alsologtostderr:
I0823 18:23:46.685077   26041 out.go:296] Setting OutFile to fd 1 ...
I0823 18:23:46.685209   26041 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:46.685221   26041 out.go:309] Setting ErrFile to fd 2...
I0823 18:23:46.685227   26041 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:46.685418   26041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
I0823 18:23:46.685984   26041 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:46.686076   26041 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:46.686450   26041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:46.686503   26041 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:46.700721   26041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
I0823 18:23:46.701172   26041 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:46.701724   26041 main.go:141] libmachine: Using API Version  1
I0823 18:23:46.701751   26041 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:46.702065   26041 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:46.702249   26041 main.go:141] libmachine: (functional-573778) Calling .GetState
I0823 18:23:46.704094   26041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:46.704139   26041 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:46.718036   26041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
I0823 18:23:46.718386   26041 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:46.718854   26041 main.go:141] libmachine: Using API Version  1
I0823 18:23:46.718875   26041 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:46.719138   26041 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:46.719312   26041 main.go:141] libmachine: (functional-573778) Calling .DriverName
I0823 18:23:46.719494   26041 ssh_runner.go:195] Run: systemctl --version
I0823 18:23:46.719519   26041 main.go:141] libmachine: (functional-573778) Calling .GetSSHHostname
I0823 18:23:46.722363   26041 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:46.722781   26041 main.go:141] libmachine: (functional-573778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:eb:fe", ip: ""} in network mk-functional-573778: {Iface:virbr1 ExpiryTime:2023-08-23 19:20:34 +0000 UTC Type:0 Mac:52:54:00:0f:eb:fe Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:functional-573778 Clientid:01:52:54:00:0f:eb:fe}
I0823 18:23:46.722817   26041 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined IP address 192.168.50.135 and MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:46.722947   26041 main.go:141] libmachine: (functional-573778) Calling .GetSSHPort
I0823 18:23:46.723112   26041 main.go:141] libmachine: (functional-573778) Calling .GetSSHKeyPath
I0823 18:23:46.723282   26041 main.go:141] libmachine: (functional-573778) Calling .GetSSHUsername
I0823 18:23:46.723423   26041 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/functional-573778/id_rsa Username:docker}
I0823 18:23:46.816868   26041 ssh_runner.go:195] Run: sudo crictl images --output json
I0823 18:23:46.858796   26041 main.go:141] libmachine: Making call to close driver server
I0823 18:23:46.858807   26041 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:46.859077   26041 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:46.859142   26041 main.go:141] libmachine: Making call to close connection to plugin binary
I0823 18:23:46.859160   26041 main.go:141] libmachine: Making call to close driver server
I0823 18:23:46.859166   26041 main.go:141] libmachine: (functional-573778) DBG | Closing plugin on server side
I0823 18:23:46.859170   26041 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:46.859416   26041 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:46.859432   26041 main.go:141] libmachine: Making call to close connection to plugin binary
I0823 18:23:46.859462   26041 main.go:141] libmachine: (functional-573778) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh pgrep buildkitd: exit status 1 (196.085224ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image build -t localhost/my-image:functional-573778 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 image build -t localhost/my-image:functional-573778 testdata/build --alsologtostderr: (4.991630947s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-573778 image build -t localhost/my-image:functional-573778 testdata/build --alsologtostderr:
I0823 18:23:47.099903   26094 out.go:296] Setting OutFile to fd 1 ...
I0823 18:23:47.100064   26094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:47.100074   26094 out.go:309] Setting ErrFile to fd 2...
I0823 18:23:47.100079   26094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 18:23:47.100261   26094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
I0823 18:23:47.100812   26094 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:47.101328   26094 config.go:182] Loaded profile config "functional-573778": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 18:23:47.101837   26094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:47.101893   26094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:47.116120   26094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
I0823 18:23:47.116662   26094 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:47.117237   26094 main.go:141] libmachine: Using API Version  1
I0823 18:23:47.117310   26094 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:47.117721   26094 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:47.117905   26094 main.go:141] libmachine: (functional-573778) Calling .GetState
I0823 18:23:47.119807   26094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 18:23:47.119857   26094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 18:23:47.133815   26094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
I0823 18:23:47.134237   26094 main.go:141] libmachine: () Calling .GetVersion
I0823 18:23:47.134743   26094 main.go:141] libmachine: Using API Version  1
I0823 18:23:47.134772   26094 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 18:23:47.135129   26094 main.go:141] libmachine: () Calling .GetMachineName
I0823 18:23:47.135323   26094 main.go:141] libmachine: (functional-573778) Calling .DriverName
I0823 18:23:47.135567   26094 ssh_runner.go:195] Run: systemctl --version
I0823 18:23:47.135594   26094 main.go:141] libmachine: (functional-573778) Calling .GetSSHHostname
I0823 18:23:47.138358   26094 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:47.138760   26094 main.go:141] libmachine: (functional-573778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:eb:fe", ip: ""} in network mk-functional-573778: {Iface:virbr1 ExpiryTime:2023-08-23 19:20:34 +0000 UTC Type:0 Mac:52:54:00:0f:eb:fe Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:functional-573778 Clientid:01:52:54:00:0f:eb:fe}
I0823 18:23:47.138792   26094 main.go:141] libmachine: (functional-573778) DBG | domain functional-573778 has defined IP address 192.168.50.135 and MAC address 52:54:00:0f:eb:fe in network mk-functional-573778
I0823 18:23:47.138958   26094 main.go:141] libmachine: (functional-573778) Calling .GetSSHPort
I0823 18:23:47.139142   26094 main.go:141] libmachine: (functional-573778) Calling .GetSSHKeyPath
I0823 18:23:47.139301   26094 main.go:141] libmachine: (functional-573778) Calling .GetSSHUsername
I0823 18:23:47.139498   26094 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/functional-573778/id_rsa Username:docker}
I0823 18:23:47.240861   26094 build_images.go:151] Building image from path: /tmp/build.1184734486.tar
I0823 18:23:47.240984   26094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0823 18:23:47.255811   26094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1184734486.tar
I0823 18:23:47.260813   26094 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1184734486.tar: stat -c "%s %y" /var/lib/minikube/build/build.1184734486.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1184734486.tar': No such file or directory
I0823 18:23:47.260844   26094 ssh_runner.go:362] scp /tmp/build.1184734486.tar --> /var/lib/minikube/build/build.1184734486.tar (3072 bytes)
I0823 18:23:47.300245   26094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1184734486
I0823 18:23:47.321352   26094 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1184734486 -xf /var/lib/minikube/build/build.1184734486.tar
I0823 18:23:47.332368   26094 containerd.go:378] Building image: /var/lib/minikube/build/build.1184734486
I0823 18:23:47.332434   26094 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1184734486 --local dockerfile=/var/lib/minikube/build/build.1184734486 --output type=image,name=localhost/my-image:functional-573778
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 2.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.2s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:7a010e66151b2cabdc1a56ba36cedd861e6cfd54ed85178f099de229aee32e99 0.0s done
#8 exporting config sha256:5d4b6e3637ad16170fd843ffcb9be17d2978f04c266a991f6bb5f43b77e3ae9e
#8 exporting config sha256:5d4b6e3637ad16170fd843ffcb9be17d2978f04c266a991f6bb5f43b77e3ae9e 0.0s done
#8 naming to localhost/my-image:functional-573778 done
#8 DONE 0.2s
I0823 18:23:52.018013   26094 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1184734486 --local dockerfile=/var/lib/minikube/build/build.1184734486 --output type=image,name=localhost/my-image:functional-573778: (4.685543613s)
I0823 18:23:52.018105   26094 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1184734486
I0823 18:23:52.035925   26094 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1184734486.tar
I0823 18:23:52.047753   26094 build_images.go:207] Built localhost/my-image:functional-573778 from /tmp/build.1184734486.tar
I0823 18:23:52.047780   26094 build_images.go:123] succeeded building to: functional-573778
I0823 18:23:52.047784   26094 build_images.go:124] failed building to: 
I0823 18:23:52.047810   26094 main.go:141] libmachine: Making call to close driver server
I0823 18:23:52.047820   26094 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:52.048093   26094 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:52.048110   26094 main.go:141] libmachine: (functional-573778) DBG | Closing plugin on server side
I0823 18:23:52.048117   26094 main.go:141] libmachine: Making call to close connection to plugin binary
I0823 18:23:52.048134   26094 main.go:141] libmachine: Making call to close driver server
I0823 18:23:52.048144   26094 main.go:141] libmachine: (functional-573778) Calling .Close
I0823 18:23:52.048420   26094 main.go:141] libmachine: Successfully made call to close driver server
I0823 18:23:52.048439   26094 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls
2023/08/23 18:23:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.40s)
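Per the stderr above, `image build` tars the local testdata/build context into /tmp, copies it into the VM under /var/lib/minikube/build, extracts it, and runs buildctl with the dockerfile.v0 frontend and an image output. A sketch of the outer flow the test exercises, building and then confirming the tag is visible (assumes the same binary path, profile, and testdata/build directory as in the log, run from the test working directory):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-573778"
	tag := "localhost/my-image:" + profile

	// Build from the same context directory used in the log above.
	build := exec.Command(bin, "-p", profile, "image", "build", "-t", tag,
		"testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Then confirm the new tag shows up in `image ls`, as the test does.
	ls, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("image present:", strings.Contains(string(ls), tag))
}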

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.734507521s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-573778
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-573778 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-573778 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-jt9hj" [20d13116-2010-41fb-a747-99e81ec6d507] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-jt9hj" [20d13116-2010-41fb-a747-99e81ec6d507] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.025680484s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)
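The app under test here is created with two plain kubectl calls: a deployment from the echoserver image and a NodePort service on port 8080. A standalone sketch of the same two calls (assumes kubectl on PATH and the functional-573778 context; this is only a paraphrase of the commands logged above, not the test code):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}

func main() {
	run("--context", "functional-573778", "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	run("--context", "functional-573778", "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
}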

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image load --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 image load --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr: (3.774212326s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image load --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 image load --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr: (3.932280496s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.17s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 service list -o json
functional_test.go:1493: Took "271.620957ms" to run "out/minikube-linux-amd64 -p functional-573778 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.135:30137
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.584531904s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-573778
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image load --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 image load --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr: (4.201253842s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.03s)
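The sequence above moves an image from the host's Docker daemon into the cluster's containerd: pull, retag for the profile, then `image load --daemon`, and finally list images to confirm. A sketch of that sequence under the same assumptions (docker available on the host, minikube binary path and profile as logged):

package main

import (
	"fmt"
	"os/exec"
)

func must(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
}

func main() {
	tag := "gcr.io/google-containers/addon-resizer:functional-573778"
	must("docker", "pull", "gcr.io/google-containers/addon-resizer:1.8.9")
	must("docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.9", tag)
	must("out/minikube-linux-amd64", "-p", "functional-573778",
		"image", "load", "--daemon", tag, "--alsologtostderr")
	must("out/minikube-linux-amd64", "-p", "functional-573778", "image", "ls")
	fmt.Println("loaded", tag)
}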

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.135:30137
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "222.253822ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "49.03823ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "266.459844ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "42.003624ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/MountCmd/any-port (28.9s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdany-port1817541822/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692814993385008549" to /tmp/TestFunctionalparallelMountCmdany-port1817541822/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692814993385008549" to /tmp/TestFunctionalparallelMountCmdany-port1817541822/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692814993385008549" to /tmp/TestFunctionalparallelMountCmdany-port1817541822/001/test-1692814993385008549
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.091764ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 23 18:23 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 23 18:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 23 18:23 test-1692814993385008549
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh cat /mount-9p/test-1692814993385008549
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-573778 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [946a99a0-ea4e-4dc5-aa1b-41ccc4d34464] Pending
helpers_test.go:344: "busybox-mount" [946a99a0-ea4e-4dc5-aa1b-41ccc4d34464] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [946a99a0-ea4e-4dc5-aa1b-41ccc4d34464] Running
helpers_test.go:344: "busybox-mount" [946a99a0-ea4e-4dc5-aa1b-41ccc4d34464] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [946a99a0-ea4e-4dc5-aa1b-41ccc4d34464] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 26.029542866s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-573778 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdany-port1817541822/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (28.90s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image save gcr.io/google-containers/addon-resizer:functional-573778 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 image save gcr.io/google-containers/addon-resizer:functional-573778 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.203030329s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image rm gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.20933028s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-573778
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 image save --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-573778 image save --daemon gcr.io/google-containers/addon-resizer:functional-573778 --alsologtostderr: (1.29908489s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-573778
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

TestFunctional/parallel/MountCmd/specific-port (1.91s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdspecific-port1049700070/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (201.058151ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdspecific-port1049700070/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh "sudo umount -f /mount-9p": exit status 1 (197.953329ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-573778 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdspecific-port1049700070/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.18s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3364264298/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3364264298/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3364264298/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T" /mount1: exit status 1 (229.894361ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-573778 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-573778 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3364264298/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3364264298/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-573778 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3364264298/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.18s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-573778
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-573778
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-573778
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (98.08s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-594467 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0823 18:24:09.715572   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-594467 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m38.083011381s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (98.08s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.03s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-594467 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-594467 addons enable ingress --alsologtostderr -v=5: (17.025354264s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.03s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-594467 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.79s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-594467 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-594467 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.943474787s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-594467 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-594467 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [786e473b-b3a3-4d2d-90a9-b62e6038dfa4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [786e473b-b3a3-4d2d-90a9-b62e6038dfa4] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.010701036s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-594467 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-594467 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-594467 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.75
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-594467 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-594467 addons disable ingress-dns --alsologtostderr -v=1: (2.181261401s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-594467 addons disable ingress --alsologtostderr -v=1
E0823 18:26:25.869723   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-594467 addons disable ingress --alsologtostderr -v=1: (7.523222858s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.79s)

TestJSONOutput/start/Command (99.55s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-884664 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0823 18:26:53.555784   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:28:00.844223   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:00.849479   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:00.859753   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:00.880068   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:00.920354   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:01.000673   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:01.161104   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:01.481604   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:02.122518   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:03.402968   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:05.963505   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-884664 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m39.549667444s)
--- PASS: TestJSONOutput/start/Command (99.55s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-884664 --output=json --user=testUser
E0823 18:28:11.084323   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-884664 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-884664 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-884664 --output=json --user=testUser: (7.083980177s)
--- PASS: TestJSONOutput/stop/Command (7.08s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-162028 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-162028 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.961156ms)

-- stdout --
	{"specversion":"1.0","id":"e5422968-c311-43b7-9646-451cef51323c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-162028] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"db10db72-e074-42f9-b601-47ae9ab878b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17086"}}
	{"specversion":"1.0","id":"1e82e3fd-8c27-47eb-a362-e4e86769af5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b222697f-ffde-4625-ad93-c66e9b7ab061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig"}}
	{"specversion":"1.0","id":"427be5eb-f8cd-4a40-8a56-9d673360b46c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube"}}
	{"specversion":"1.0","id":"5aa8b7e2-5f0e-4dd6-8445-96352b1d9bb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"533d800d-c9ab-407b-8c19-79ab59511f3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"12e28e06-ef9b-4928-90b1-44ef5b9669bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-162028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-162028
--- PASS: TestErrorJSONOutput (0.18s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (101.02s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-847787 --driver=kvm2  --container-runtime=containerd
E0823 18:28:21.325385   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:28:41.805611   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-847787 --driver=kvm2  --container-runtime=containerd: (47.135486967s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-851218 --driver=kvm2  --container-runtime=containerd
E0823 18:29:22.766861   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-851218 --driver=kvm2  --container-runtime=containerd: (51.069984083s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-847787
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-851218
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-851218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-851218
helpers_test.go:175: Cleaning up "first-847787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-847787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-847787: (1.012451949s)
--- PASS: TestMinikubeProfile (101.02s)

TestMountStart/serial/StartWithMountFirst (28.04s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-239562 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-239562 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.036572109s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.04s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-239562 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-239562 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (29.74s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-255660 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0823 18:30:44.688196   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:30:53.325698   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:53.330945   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:53.341183   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:53.361477   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:53.401755   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:53.482071   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:53.642447   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:53.963016   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:54.603911   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:30:55.884510   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-255660 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.74224096s)
E0823 18:30:58.444897   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (29.74s)

TestMountStart/serial/VerifyMountSecond (0.36s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-255660 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-255660 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.85s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-239562 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.85s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-255660 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-255660 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.19s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-255660
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-255660: (1.187950228s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (24.59s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-255660
E0823 18:31:03.565701   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:31:13.806746   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-255660: (23.584819219s)
E0823 18:31:25.869190   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (24.59s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-255660 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-255660 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (189.83s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-309083 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0823 18:31:34.287200   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:32:15.248140   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:33:00.844506   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:33:28.529505   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:33:37.169579   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-309083 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m9.404487795s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (189.83s)

TestMultiNode/serial/DeployApp2Nodes (6.33s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-309083 -- rollout status deployment/busybox: (4.335138629s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-d76w2 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-npptd -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-d76w2 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-npptd -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-d76w2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-npptd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.33s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-d76w2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-d76w2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-npptd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-309083 -- exec busybox-5bc68d56bd-npptd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (46.04s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-309083 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-309083 -v 3 --alsologtostderr: (45.463371908s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.04s)

TestMultiNode/serial/ProfileList (0.2s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (7.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp testdata/cp-test.txt multinode-309083:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2634868528/001/cp-test_multinode-309083.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083:/home/docker/cp-test.txt multinode-309083-m02:/home/docker/cp-test_multinode-309083_multinode-309083-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m02 "sudo cat /home/docker/cp-test_multinode-309083_multinode-309083-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083:/home/docker/cp-test.txt multinode-309083-m03:/home/docker/cp-test_multinode-309083_multinode-309083-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m03 "sudo cat /home/docker/cp-test_multinode-309083_multinode-309083-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp testdata/cp-test.txt multinode-309083-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2634868528/001/cp-test_multinode-309083-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083-m02:/home/docker/cp-test.txt multinode-309083:/home/docker/cp-test_multinode-309083-m02_multinode-309083.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083 "sudo cat /home/docker/cp-test_multinode-309083-m02_multinode-309083.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083-m02:/home/docker/cp-test.txt multinode-309083-m03:/home/docker/cp-test_multinode-309083-m02_multinode-309083-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m03 "sudo cat /home/docker/cp-test_multinode-309083-m02_multinode-309083-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp testdata/cp-test.txt multinode-309083-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2634868528/001/cp-test_multinode-309083-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083-m03:/home/docker/cp-test.txt multinode-309083:/home/docker/cp-test_multinode-309083-m03_multinode-309083.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083 "sudo cat /home/docker/cp-test_multinode-309083-m03_multinode-309083.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 cp multinode-309083-m03:/home/docker/cp-test.txt multinode-309083-m02:/home/docker/cp-test_multinode-309083-m03_multinode-309083-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m02 "sudo cat /home/docker/cp-test_multinode-309083-m03_multinode-309083-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.25s)
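Note: the copy checks above exercise minikube's profile-aware cp syntax (local path -> node:path, node:path -> local, and node:path -> node:path), each verified with ssh -n plus sudo cat. A minimal sketch of one round trip, using the profile and paths from this run:

out/minikube-linux-amd64 -p multinode-309083 cp testdata/cp-test.txt multinode-309083-m02:/home/docker/cp-test.txt   # push a local file to a specific node
out/minikube-linux-amd64 -p multinode-309083 ssh -n multinode-309083-m02 "sudo cat /home/docker/cp-test.txt"         # read it back on that node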

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-309083 node stop m03: (1.334910706s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-309083 status: exit status 7 (436.649919ms)

                                                
                                                
-- stdout --
	multinode-309083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-309083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-309083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr: exit status 7 (442.201402ms)

                                                
                                                
-- stdout --
	multinode-309083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-309083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-309083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 18:35:39.628242   32840 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:35:39.628361   32840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:35:39.628370   32840 out.go:309] Setting ErrFile to fd 2...
	I0823 18:35:39.628375   32840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:35:39.628583   32840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 18:35:39.628768   32840 out.go:303] Setting JSON to false
	I0823 18:35:39.628806   32840 mustload.go:65] Loading cluster: multinode-309083
	I0823 18:35:39.628890   32840 notify.go:220] Checking for updates...
	I0823 18:35:39.629219   32840 config.go:182] Loaded profile config "multinode-309083": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0823 18:35:39.629233   32840 status.go:255] checking status of multinode-309083 ...
	I0823 18:35:39.629651   32840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:35:39.629717   32840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:35:39.645171   32840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I0823 18:35:39.645617   32840 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:35:39.646154   32840 main.go:141] libmachine: Using API Version  1
	I0823 18:35:39.646179   32840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:35:39.646593   32840 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:35:39.646775   32840 main.go:141] libmachine: (multinode-309083) Calling .GetState
	I0823 18:35:39.648422   32840 status.go:330] multinode-309083 host status = "Running" (err=<nil>)
	I0823 18:35:39.648434   32840 host.go:66] Checking if "multinode-309083" exists ...
	I0823 18:35:39.648732   32840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:35:39.648773   32840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:35:39.663077   32840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0823 18:35:39.663523   32840 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:35:39.663964   32840 main.go:141] libmachine: Using API Version  1
	I0823 18:35:39.663985   32840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:35:39.664289   32840 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:35:39.664476   32840 main.go:141] libmachine: (multinode-309083) Calling .GetIP
	I0823 18:35:39.667066   32840 main.go:141] libmachine: (multinode-309083) DBG | domain multinode-309083 has defined MAC address 52:54:00:40:8b:a6 in network mk-multinode-309083
	I0823 18:35:39.667443   32840 main.go:141] libmachine: (multinode-309083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:8b:a6", ip: ""} in network mk-multinode-309083: {Iface:virbr1 ExpiryTime:2023-08-23 19:31:43 +0000 UTC Type:0 Mac:52:54:00:40:8b:a6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-309083 Clientid:01:52:54:00:40:8b:a6}
	I0823 18:35:39.667478   32840 main.go:141] libmachine: (multinode-309083) DBG | domain multinode-309083 has defined IP address 192.168.39.49 and MAC address 52:54:00:40:8b:a6 in network mk-multinode-309083
	I0823 18:35:39.667565   32840 host.go:66] Checking if "multinode-309083" exists ...
	I0823 18:35:39.667851   32840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:35:39.667889   32840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:35:39.682081   32840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0823 18:35:39.682440   32840 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:35:39.682927   32840 main.go:141] libmachine: Using API Version  1
	I0823 18:35:39.682949   32840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:35:39.683258   32840 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:35:39.683460   32840 main.go:141] libmachine: (multinode-309083) Calling .DriverName
	I0823 18:35:39.683641   32840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0823 18:35:39.683667   32840 main.go:141] libmachine: (multinode-309083) Calling .GetSSHHostname
	I0823 18:35:39.686381   32840 main.go:141] libmachine: (multinode-309083) DBG | domain multinode-309083 has defined MAC address 52:54:00:40:8b:a6 in network mk-multinode-309083
	I0823 18:35:39.686792   32840 main.go:141] libmachine: (multinode-309083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:8b:a6", ip: ""} in network mk-multinode-309083: {Iface:virbr1 ExpiryTime:2023-08-23 19:31:43 +0000 UTC Type:0 Mac:52:54:00:40:8b:a6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-309083 Clientid:01:52:54:00:40:8b:a6}
	I0823 18:35:39.686829   32840 main.go:141] libmachine: (multinode-309083) DBG | domain multinode-309083 has defined IP address 192.168.39.49 and MAC address 52:54:00:40:8b:a6 in network mk-multinode-309083
	I0823 18:35:39.686939   32840 main.go:141] libmachine: (multinode-309083) Calling .GetSSHPort
	I0823 18:35:39.687113   32840 main.go:141] libmachine: (multinode-309083) Calling .GetSSHKeyPath
	I0823 18:35:39.687281   32840 main.go:141] libmachine: (multinode-309083) Calling .GetSSHUsername
	I0823 18:35:39.687429   32840 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/multinode-309083/id_rsa Username:docker}
	I0823 18:35:39.785214   32840 ssh_runner.go:195] Run: systemctl --version
	I0823 18:35:39.790912   32840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 18:35:39.807123   32840 kubeconfig.go:92] found "multinode-309083" server: "https://192.168.39.49:8443"
	I0823 18:35:39.807146   32840 api_server.go:166] Checking apiserver status ...
	I0823 18:35:39.807177   32840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 18:35:39.821060   32840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1086/cgroup
	I0823 18:35:39.831336   32840 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/podd4c35530220969b953b9bea091af0494/2ff9e1ce545850ef9d3340cde7a7d74a8eabaf6e47d19745e5c73a18df242209"
	I0823 18:35:39.831416   32840 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4c35530220969b953b9bea091af0494/2ff9e1ce545850ef9d3340cde7a7d74a8eabaf6e47d19745e5c73a18df242209/freezer.state
	I0823 18:35:39.842760   32840 api_server.go:204] freezer state: "THAWED"
	I0823 18:35:39.842788   32840 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0823 18:35:39.847729   32840 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I0823 18:35:39.847754   32840 status.go:421] multinode-309083 apiserver status = Running (err=<nil>)
	I0823 18:35:39.847763   32840 status.go:257] multinode-309083 status: &{Name:multinode-309083 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0823 18:35:39.847777   32840 status.go:255] checking status of multinode-309083-m02 ...
	I0823 18:35:39.848068   32840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:35:39.848094   32840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:35:39.864146   32840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45341
	I0823 18:35:39.864570   32840 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:35:39.865079   32840 main.go:141] libmachine: Using API Version  1
	I0823 18:35:39.865106   32840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:35:39.865435   32840 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:35:39.865624   32840 main.go:141] libmachine: (multinode-309083-m02) Calling .GetState
	I0823 18:35:39.867016   32840 status.go:330] multinode-309083-m02 host status = "Running" (err=<nil>)
	I0823 18:35:39.867033   32840 host.go:66] Checking if "multinode-309083-m02" exists ...
	I0823 18:35:39.867298   32840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:35:39.867345   32840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:35:39.882247   32840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39215
	I0823 18:35:39.882611   32840 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:35:39.883049   32840 main.go:141] libmachine: Using API Version  1
	I0823 18:35:39.883068   32840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:35:39.883381   32840 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:35:39.883548   32840 main.go:141] libmachine: (multinode-309083-m02) Calling .GetIP
	I0823 18:35:39.886267   32840 main.go:141] libmachine: (multinode-309083-m02) DBG | domain multinode-309083-m02 has defined MAC address 52:54:00:00:bf:0f in network mk-multinode-309083
	I0823 18:35:39.886660   32840 main.go:141] libmachine: (multinode-309083-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bf:0f", ip: ""} in network mk-multinode-309083: {Iface:virbr1 ExpiryTime:2023-08-23 19:32:53 +0000 UTC Type:0 Mac:52:54:00:00:bf:0f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:multinode-309083-m02 Clientid:01:52:54:00:00:bf:0f}
	I0823 18:35:39.886688   32840 main.go:141] libmachine: (multinode-309083-m02) DBG | domain multinode-309083-m02 has defined IP address 192.168.39.131 and MAC address 52:54:00:00:bf:0f in network mk-multinode-309083
	I0823 18:35:39.886842   32840 host.go:66] Checking if "multinode-309083-m02" exists ...
	I0823 18:35:39.887123   32840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:35:39.887153   32840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:35:39.901521   32840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34361
	I0823 18:35:39.901916   32840 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:35:39.902355   32840 main.go:141] libmachine: Using API Version  1
	I0823 18:35:39.902380   32840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:35:39.902683   32840 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:35:39.902843   32840 main.go:141] libmachine: (multinode-309083-m02) Calling .DriverName
	I0823 18:35:39.903017   32840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0823 18:35:39.903041   32840 main.go:141] libmachine: (multinode-309083-m02) Calling .GetSSHHostname
	I0823 18:35:39.905551   32840 main.go:141] libmachine: (multinode-309083-m02) DBG | domain multinode-309083-m02 has defined MAC address 52:54:00:00:bf:0f in network mk-multinode-309083
	I0823 18:35:39.905967   32840 main.go:141] libmachine: (multinode-309083-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bf:0f", ip: ""} in network mk-multinode-309083: {Iface:virbr1 ExpiryTime:2023-08-23 19:32:53 +0000 UTC Type:0 Mac:52:54:00:00:bf:0f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:multinode-309083-m02 Clientid:01:52:54:00:00:bf:0f}
	I0823 18:35:39.906007   32840 main.go:141] libmachine: (multinode-309083-m02) DBG | domain multinode-309083-m02 has defined IP address 192.168.39.131 and MAC address 52:54:00:00:bf:0f in network mk-multinode-309083
	I0823 18:35:39.906111   32840 main.go:141] libmachine: (multinode-309083-m02) Calling .GetSSHPort
	I0823 18:35:39.906264   32840 main.go:141] libmachine: (multinode-309083-m02) Calling .GetSSHKeyPath
	I0823 18:35:39.906405   32840 main.go:141] libmachine: (multinode-309083-m02) Calling .GetSSHUsername
	I0823 18:35:39.906531   32840 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/multinode-309083-m02/id_rsa Username:docker}
	I0823 18:35:40.000731   32840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 18:35:40.013718   32840 status.go:257] multinode-309083-m02 status: &{Name:multinode-309083-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0823 18:35:40.013757   32840 status.go:255] checking status of multinode-309083-m03 ...
	I0823 18:35:40.014143   32840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:35:40.014176   32840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:35:40.029182   32840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
	I0823 18:35:40.029536   32840 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:35:40.030125   32840 main.go:141] libmachine: Using API Version  1
	I0823 18:35:40.030152   32840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:35:40.030543   32840 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:35:40.030718   32840 main.go:141] libmachine: (multinode-309083-m03) Calling .GetState
	I0823 18:35:40.032426   32840 status.go:330] multinode-309083-m03 host status = "Stopped" (err=<nil>)
	I0823 18:35:40.032451   32840 status.go:343] host is not running, skipping remaining checks
	I0823 18:35:40.032458   32840 status.go:257] multinode-309083-m03 status: &{Name:multinode-309083-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
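Note: with one node stopped, status deliberately returns exit code 7 rather than 0, so callers have to branch on the exit code instead of treating any non-zero result as a hard failure. A small sketch of that, assuming the same profile name:

out/minikube-linux-amd64 -p multinode-309083 status
rc=$?   # 7 here means at least one node is not running
if [ "$rc" -ne 0 ]; then
  echo "cluster not fully running (status exit code $rc)"
fi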

                                                
                                    
TestMultiNode/serial/StartAfterStop (27.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 node start m03 --alsologtostderr
E0823 18:35:53.325779   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-309083 node start m03 --alsologtostderr: (26.685035008s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.32s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (312.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-309083
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-309083
E0823 18:36:21.010735   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:36:25.869753   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:37:48.917058   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:38:00.844289   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-309083: (3m4.83397422s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-309083 --wait=true -v=8 --alsologtostderr
E0823 18:40:53.325451   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-309083 --wait=true -v=8 --alsologtostderr: (2m7.53774076s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-309083
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.45s)
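Note: the restart check above is a plain stop followed by start --wait=true on the same profile, after which node list should still report every node that was added before the stop. A minimal sketch of that sequence (flags trimmed to the ones that matter here):

out/minikube-linux-amd64 stop -p multinode-309083
out/minikube-linux-amd64 start -p multinode-309083 --wait=true   # recreates the previously added nodes
out/minikube-linux-amd64 node list -p multinode-309083           # should list the control plane plus both workers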

                                                
                                    
TestMultiNode/serial/DeleteNode (1.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-309083 node delete m03: (1.195934883s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 stop
E0823 18:41:25.870044   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 18:43:00.844513   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 18:44:23.889718   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-309083 stop: (3m3.574795782s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-309083 status: exit status 7 (73.509554ms)

                                                
                                                
-- stdout --
	multinode-309083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-309083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr: exit status 7 (80.023856ms)

                                                
                                                
-- stdout --
	multinode-309083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-309083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 18:44:25.227542   34986 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:44:25.227695   34986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:44:25.227703   34986 out.go:309] Setting ErrFile to fd 2...
	I0823 18:44:25.227707   34986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:44:25.227907   34986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 18:44:25.228051   34986 out.go:303] Setting JSON to false
	I0823 18:44:25.228079   34986 mustload.go:65] Loading cluster: multinode-309083
	I0823 18:44:25.228127   34986 notify.go:220] Checking for updates...
	I0823 18:44:25.228426   34986 config.go:182] Loaded profile config "multinode-309083": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0823 18:44:25.228440   34986 status.go:255] checking status of multinode-309083 ...
	I0823 18:44:25.228774   34986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:44:25.228814   34986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:44:25.248649   34986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0823 18:44:25.249047   34986 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:44:25.249613   34986 main.go:141] libmachine: Using API Version  1
	I0823 18:44:25.249636   34986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:44:25.249914   34986 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:44:25.250087   34986 main.go:141] libmachine: (multinode-309083) Calling .GetState
	I0823 18:44:25.251647   34986 status.go:330] multinode-309083 host status = "Stopped" (err=<nil>)
	I0823 18:44:25.251660   34986 status.go:343] host is not running, skipping remaining checks
	I0823 18:44:25.251665   34986 status.go:257] multinode-309083 status: &{Name:multinode-309083 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0823 18:44:25.251701   34986 status.go:255] checking status of multinode-309083-m02 ...
	I0823 18:44:25.251989   34986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0823 18:44:25.252028   34986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0823 18:44:25.266009   34986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0823 18:44:25.266353   34986 main.go:141] libmachine: () Calling .GetVersion
	I0823 18:44:25.266784   34986 main.go:141] libmachine: Using API Version  1
	I0823 18:44:25.266803   34986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0823 18:44:25.267163   34986 main.go:141] libmachine: () Calling .GetMachineName
	I0823 18:44:25.267351   34986 main.go:141] libmachine: (multinode-309083-m02) Calling .GetState
	I0823 18:44:25.268784   34986 status.go:330] multinode-309083-m02 host status = "Stopped" (err=<nil>)
	I0823 18:44:25.268795   34986 status.go:343] host is not running, skipping remaining checks
	I0823 18:44:25.268801   34986 status.go:257] multinode-309083-m02 status: &{Name:multinode-309083-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.73s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (89.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-309083 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0823 18:45:53.325398   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-309083 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m29.344323518s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-309083 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.87s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (50.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-309083
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-309083-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-309083-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (57.1927ms)

                                                
                                                
-- stdout --
	* [multinode-309083-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-309083-m02' is duplicated with machine name 'multinode-309083-m02' in profile 'multinode-309083'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-309083-m03 --driver=kvm2  --container-runtime=containerd
E0823 18:46:25.871048   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-309083-m03 --driver=kvm2  --container-runtime=containerd: (49.043023764s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-309083
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-309083: exit status 80 (216.239952ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-309083
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-309083-m03 already exists in multinode-309083-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-309083-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.31s)

                                                
                                    
TestPreload (336.93s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-721906 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0823 18:47:16.371464   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 18:48:00.845002   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-721906 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m37.616772825s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-721906 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-721906 image pull gcr.io/k8s-minikube/busybox: (2.90876723s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-721906
E0823 18:50:53.326104   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-721906: (1m31.910918145s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-721906 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0823 18:51:25.871146   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-721906 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m23.243858746s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-721906 image list
helpers_test.go:175: Cleaning up "test-preload-721906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-721906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-721906: (1.014074438s)
--- PASS: TestPreload (336.93s)
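Note: the preload check boils down to pulling an extra image into a cluster started with --preload=false, restarting it, and confirming the image is still in the runtime's store. A minimal sketch with the profile and image from this run (restart flags trimmed):

out/minikube-linux-amd64 -p test-preload-721906 image pull gcr.io/k8s-minikube/busybox
out/minikube-linux-amd64 stop -p test-preload-721906
out/minikube-linux-amd64 start -p test-preload-721906 --wait=true
out/minikube-linux-amd64 -p test-preload-721906 image list | grep busybox   # the pulled image should survive the restart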

                                                
                                    
TestScheduledStopUnix (117.9s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-773417 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0823 18:53:00.844222   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-773417 --memory=2048 --driver=kvm2  --container-runtime=containerd: (46.39455733s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773417 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-773417 -n scheduled-stop-773417
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773417 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773417 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-773417 -n scheduled-stop-773417
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-773417
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773417 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-773417
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-773417: exit status 7 (59.758366ms)

                                                
                                                
-- stdout --
	scheduled-stop-773417
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-773417 -n scheduled-stop-773417
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-773417 -n scheduled-stop-773417: exit status 7 (56.242165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-773417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-773417
--- PASS: TestScheduledStopUnix (117.90s)
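Note: the scheduled-stop flow above arms a delayed stop, inspects the timer through the status template, and can cancel it before it fires. A minimal sketch with the profile from this run:

out/minikube-linux-amd64 stop -p scheduled-stop-773417 --schedule 5m                  # arm a stop 5 minutes out
out/minikube-linux-amd64 status -p scheduled-stop-773417 --format={{.TimeToStop}}     # show the remaining time
out/minikube-linux-amd64 stop -p scheduled-stop-773417 --cancel-scheduled             # disarm it again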

                                                
                                    
TestKubernetesUpgrade (210.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.902952328s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-332106
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-332106: (12.100935858s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-332106 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-332106 status --format={{.Host}}: exit status 7 (65.991651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m36.369607924s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-332106 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (93.987092ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-332106] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-332106
	    minikube start -p kubernetes-upgrade-332106 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3321062 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-332106 --kubernetes-version=v1.28.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --memory=2200 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (32.305072809s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-332106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-332106
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-332106: (1.192038546s)
--- PASS: TestKubernetesUpgrade (210.10s)
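Note: the upgrade path here is start at the old version, stop, then start again with a newer --kubernetes-version; asking for an older version afterwards is refused with exit status 106 and the suggested recovery is delete plus recreate. A minimal sketch of the supported direction, using the versions from this run:

out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd
out/minikube-linux-amd64 stop -p kubernetes-upgrade-332106
out/minikube-linux-amd64 start -p kubernetes-upgrade-332106 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd   # upgrade in place; a downgrade back to v1.16.0 would exit 106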

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.28s)

                                                
                                    
TestPause/serial/Start (156.26s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-192565 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-192565 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m36.260905595s)
--- PASS: TestPause/serial/Start (156.26s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-192565 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-192565 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (7.409238501s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.42s)

                                                
                                    
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-192565 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-192565 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-192565 --output=json --layout=cluster: exit status 2 (244.948917ms)

                                                
                                                
-- stdout --
	{"Name":"pause-192565","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-192565","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
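Note: for a paused cluster, status --output=json --layout=cluster exits with code 2 and reports StatusCode 418 / StatusName "Paused" in the JSON above. A small sketch of reading that field; jq is only an illustrative assumption here, any JSON parser would do:

out/minikube-linux-amd64 status -p pause-192565 --output=json --layout=cluster | jq -r '.StatusName'   # prints "Paused" while the cluster is paused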

                                                
                                    
TestPause/serial/Unpause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-192565 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
TestPause/serial/PauseAgain (0.71s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-192565 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

                                                
                                    
TestPause/serial/DeletePaused (0.98s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-192565 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.98s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (11.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (11.343917567s)
--- PASS: TestPause/serial/VerifyDeletedResources (11.34s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767093 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-767093 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (72.040745ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-767093] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
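Note: --no-kubernetes cannot be combined with --kubernetes-version, and the error's own suggestion is to unset any globally pinned version first. A minimal sketch following that suggestion for the same profile:

out/minikube-linux-amd64 config unset kubernetes-version
out/minikube-linux-amd64 start -p NoKubernetes-767093 --no-kubernetes --driver=kvm2 --container-runtime=containerd   # start the VM without Kubernetes components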

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (50.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767093 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767093 --driver=kvm2  --container-runtime=containerd: (50.605812272s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-767093 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.86s)

                                                
                                    
TestNetworkPlugins/group/false (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-573325 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-573325 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (100.086534ms)

                                                
                                                
-- stdout --
	* [false-573325] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17086
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 18:57:26.937676   41358 out.go:296] Setting OutFile to fd 1 ...
	I0823 18:57:26.937818   41358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:57:26.937828   41358 out.go:309] Setting ErrFile to fd 2...
	I0823 18:57:26.937832   41358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 18:57:26.938021   41358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
	I0823 18:57:26.938678   41358 out.go:303] Setting JSON to false
	I0823 18:57:26.939935   41358 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5991,"bootTime":1692811056,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0823 18:57:26.939999   41358 start.go:138] virtualization: kvm guest
	I0823 18:57:26.942170   41358 out.go:177] * [false-573325] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0823 18:57:26.943649   41358 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 18:57:26.943654   41358 notify.go:220] Checking for updates...
	I0823 18:57:26.945065   41358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 18:57:26.946335   41358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
	I0823 18:57:26.947710   41358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
	I0823 18:57:26.949018   41358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0823 18:57:26.950763   41358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 18:57:26.952424   41358 config.go:182] Loaded profile config "NoKubernetes-767093": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0823 18:57:26.952522   41358 config.go:182] Loaded profile config "kubernetes-upgrade-332106": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I0823 18:57:26.952597   41358 config.go:182] Loaded profile config "stopped-upgrade-228249": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0823 18:57:26.952661   41358 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 18:57:26.991041   41358 out.go:177] * Using the kvm2 driver based on user configuration
	I0823 18:57:26.992232   41358 start.go:298] selected driver: kvm2
	I0823 18:57:26.992247   41358 start.go:902] validating driver "kvm2" against <nil>
	I0823 18:57:26.992259   41358 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 18:57:26.994255   41358 out.go:177] 
	W0823 18:57:26.995428   41358 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0823 18:57:26.996805   41358 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-573325 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-573325" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 23 Aug 2023 18:57:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.39.90:8443
  name: kubernetes-upgrade-332106
contexts:
- context:
    cluster: kubernetes-upgrade-332106
    extensions:
    - extension:
        last-update: Wed, 23 Aug 2023 18:57:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-332106
  name: kubernetes-upgrade-332106
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-332106
  user:
    client-certificate: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kubernetes-upgrade-332106/client.crt
    client-key: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kubernetes-upgrade-332106/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-573325

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-573325"

                                                
                                                
----------------------- debugLogs end: false-573325 [took: 2.969575138s] --------------------------------
helpers_test.go:175: Cleaning up "false-573325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-573325
--- PASS: TestNetworkPlugins/group/false (3.21s)
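
Note on the non-zero exit above: with --container-runtime=containerd, minikube refuses to start without a CNI, so the --cni=false invocation fails validation with MK_USAGE and exit status 14, which is exactly what this test expects. A minimal sketch of an invocation that would pass that check (any built-in CNI works; bridge shown here):

	$ out/minikube-linux-amd64 start -p false-573325 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=containerd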

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (66.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767093 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767093 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m5.619304719s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-767093 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-767093 status -o json: exit status 2 (227.895127ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-767093","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-767093
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-767093: (1.022370653s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.87s)
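
The exit status 2 from "status -o json" above is expected while the Kubernetes components are stopped: minikube status maps stopped components to a non-zero exit code but still prints the JSON on stdout. A hedged sketch of pulling a single field from that output (assumes jq is available on the host):

	$ out/minikube-linux-amd64 -p NoKubernetes-767093 status -o json | jq -r .Kubelet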

                                                
                                    
x
+
TestNoKubernetes/serial/Start (33.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767093 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767093 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (33.87925449s)
--- PASS: TestNoKubernetes/serial/Start (33.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-767093 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-767093 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.530607ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
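
The "Process exited with status 3" above is the expected outcome: systemctl is-active exits non-zero when the unit is not active, and --quiet only suppresses the textual state. A hedged manual check that prints the state instead (it would still exit non-zero for an inactive kubelet):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-767093 "sudo systemctl is-active kubelet"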

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (15.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.062123171s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.51s)
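
For scripting against the second command above, the JSON output can be filtered on the host; a hedged sketch assuming the usual valid/invalid layout of "profile list --output=json" and that jq is installed:

	$ out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'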

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-767093
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-767093: (1.22995318s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (26.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767093 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767093 --driver=kvm2  --container-runtime=containerd: (26.82842776s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-767093 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-767093 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.954385ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (89.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0823 19:00:53.325716   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 19:01:03.890293   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 19:01:25.869728   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m29.051201544s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (72.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m12.144611403s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-573325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-573325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fgwvl" [6e3e4c91-4788-4a81-818f-cf02f2a62045] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fgwvl" [6e3e4c91-4788-4a81-818f-cf02f2a62045] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.017052882s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.47s)
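
The readiness wait performed by net_test.go:163 can be reproduced by hand with kubectl; a hedged equivalent using the label, namespace, and context from the log above:

	$ kubectl --context auto-573325 wait --for=condition=Ready pod -l app=netcat --timeout=900s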

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-573325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
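
The HairPin check above connects from the netcat pod back to the "netcat" service it sits behind, i.e. it exercises hairpin traffic through the service VIP. A hedged manual equivalent using the fully qualified service name (assumes the test manifest's companion service is named netcat in the default namespace):

	$ kubectl --context auto-573325 exec deployment/netcat -- nc -w 5 -z netcat.default.svc.cluster.local 8080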

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m35.933642706s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5vw5d" [045b785d-5d55-4d71-9914-0c2a78ffaf17] Running
E0823 19:03:00.842568   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024346732s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
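
The same check can be run by hand against the kindnet DaemonSet pods; a hedged equivalent using the label and namespace from the log above:

	$ kubectl --context kindnet-573325 -n kube-system get pods -l app=kindnet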

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-573325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-573325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f8hhg" [9e6c807c-979e-4a75-a2ac-c4656ff2e57f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f8hhg" [9e6c807c-979e-4a75-a2ac-c4656ff2e57f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.013913546s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-573325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (89.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0823 19:03:56.371919   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m29.152143796s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.15s)
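
Unlike the other network-plugin runs, this one passes a manifest path rather than a built-in plugin name to --cni. A hedged sketch of the same idea outside the test tree (the local path is illustrative):

	$ out/minikube-linux-amd64 start -p custom-flannel-573325 --memory=3072 --cni=./kube-flannel.yaml --driver=kvm2 --container-runtime=containerd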

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-b27z6" [77c0f42b-fa40-4d29-b030-7197510c6a5f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028909754s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-573325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-573325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kwqj8" [de57aacb-814e-4840-b2ab-a352e1afe95a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kwqj8" [de57aacb-814e-4840-b2ab-a352e1afe95a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.011802217s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-573325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (104.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m44.726207027s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.73s)
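
The --enable-default-cni=true flag used above is the legacy spelling of the default bridge CNI; assuming the flag mapping from its deprecation notice, a roughly equivalent modern invocation would be:

	$ out/minikube-linux-amd64 start -p enable-default-cni-573325 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=containerd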

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-573325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-573325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2dnb7" [79be02bf-4576-4e04-bc43-cb47e80d3d5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2dnb7" [79be02bf-4576-4e04-bc43-cb47e80d3d5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.012262002s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-573325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (87.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0823 19:05:53.325847   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
E0823 19:06:25.869970   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m27.34221518s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-573325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-573325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6bcpn" [bb1d484f-ee1a-46bb-a63f-28e067942826] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6bcpn" [bb1d484f-ee1a-46bb-a63f-28e067942826] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.01432159s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-573325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nww77" [9b8ec19f-2c98-4c20-9897-0d4ed0a76c01] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.021769203s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (102.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-573325 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m42.563173013s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-573325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.40s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-573325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mvlf8" [71dcd820-1681-4c5a-a8a6-124fc53888fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mvlf8" [71dcd820-1681-4c5a-a8a6-124fc53888fc] Running
E0823 19:07:06.833580   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:06.838884   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:06.849141   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:06.869508   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:06.909823   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:06.990171   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:07.150604   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:07.471211   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:08.111520   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:09.392343   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.013510726s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-573325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (136.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-355473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0823 19:07:27.314702   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:47.795091   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:07:55.834641   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:55.839944   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:55.850260   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:55.870611   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:55.910953   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:55.991277   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:56.151671   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:56.472278   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:57.112622   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:07:58.392863   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:08:00.842682   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 19:08:00.953913   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:08:06.074195   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:08:16.314373   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:08:28.755696   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:08:36.794850   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-355473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m16.605789154s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.61s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-573325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-573325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t5rnm" [9e1cf016-0bc3-4f89-96fe-a4a685f10529] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t5rnm" [9e1cf016-0bc3-4f89-96fe-a4a685f10529] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.011266611s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-573325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-573325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0823 19:19:05.134338   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (118.31s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E0823 19:09:10.683519   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:10.688787   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:10.699046   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:10.719347   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:10.759651   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:10.839981   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:11.000135   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:11.320836   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:11.961174   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:13.241691   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:15.802045   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:17.755430   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:09:20.922962   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:09:31.163980   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m58.307924634s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-355473 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f4565ee-6228-4190-9b1b-826a9a682efa] Pending
helpers_test.go:344: "busybox" [0f4565ee-6228-4190-9b1b-826a9a682efa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0f4565ee-6228-4190-9b1b-826a9a682efa] Running
E0823 19:09:50.676893   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:09:51.645004   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.032195311s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-355473 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-355473 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-355473 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (102.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-355473 --alsologtostderr -v=3
E0823 19:09:58.450157   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:58.455430   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:58.465735   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:58.486030   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:58.526451   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:58.606737   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:58.767232   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:59.087860   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:09:59.728511   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:10:01.008679   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:10:03.569153   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:10:08.689972   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:10:18.930182   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:10:32.605905   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:10:39.410850   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:10:39.676151   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:10:53.326110   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-355473 --alsologtostderr -v=3: (1m42.412608613s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (102.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.5s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-301101 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93998760-853a-43eb-8f57-273560710af3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [93998760-853a-43eb-8f57-273560710af3] Running
E0823 19:11:08.919389   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.034859417s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-301101 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080644811s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-301101 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (92.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-301101 --alsologtostderr -v=3
E0823 19:11:20.371069   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-301101 --alsologtostderr -v=3: (1m32.361018786s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.36s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-228249
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-319240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E0823 19:11:28.510691   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:28.515956   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:28.526214   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:28.546466   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:28.586753   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:28.666978   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:28.827947   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:29.148222   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:29.788377   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:31.069233   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:33.630104   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-319240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m40.573143293s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-355473 -n old-k8s-version-355473
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-355473 -n old-k8s-version-355473: exit status 7 (67.495989ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-355473 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (450.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-355473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0823 19:11:38.751222   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:48.992317   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:11:53.971406   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:53.976688   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:53.986928   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:54.007052   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:54.048078   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:54.128816   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:54.289432   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:54.526782   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:11:54.610129   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:55.250785   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:56.531979   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:11:59.092757   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:12:04.213074   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:12:06.833449   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:12:09.472847   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:12:14.453650   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:12:34.518068   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
E0823 19:12:34.934695   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:12:42.291585   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-355473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m30.171298577s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-355473 -n old-k8s-version-355473
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (450.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301101 -n no-preload-301101
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301101 -n no-preload-301101: exit status 7 (57.107091ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-301101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (334.38s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E0823 19:12:50.433148   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:12:55.834738   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:13:00.842137   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (5m34.047598425s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301101 -n no-preload-301101
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (334.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-319240 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b15577ec-eba0-4190-8ffd-cb27de7ca897] Pending
helpers_test.go:344: "busybox" [b15577ec-eba0-4190-8ffd-cb27de7ca897] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b15577ec-eba0-4190-8ffd-cb27de7ca897] Running
E0823 19:13:15.895225   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.039076918s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-319240 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-319240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-319240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084859421s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-319240 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (92.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-319240 --alsologtostderr -v=3
E0823 19:13:23.516599   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:13:37.451948   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:37.457244   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:37.467514   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:37.487781   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:37.528255   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:37.608560   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:37.768888   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:38.089512   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:38.730094   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:40.010237   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:42.570912   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:47.691427   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:13:57.932655   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:14:10.682560   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
E0823 19:14:12.353673   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:14:18.412963   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:14:37.816167   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:14:38.367191   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-319240 --alsologtostderr -v=3: (1m32.205351233s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240: exit status 7 (61.64305ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-319240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-319240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-319240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (5m29.325342469s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (78.78s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-895446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E0823 19:14:59.373746   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
E0823 19:15:26.132028   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
E0823 19:15:53.325814   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-895446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m18.781032152s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (78.78s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-895446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-895446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.363134391s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-895446 --alsologtostderr -v=3
E0823 19:16:21.293988   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-895446 --alsologtostderr -v=3: (2.094465926s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-895446 -n newest-cni-895446
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-895446 -n newest-cni-895446: exit status 7 (60.846747ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-895446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (49.33s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-895446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E0823 19:16:25.870024   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/addons-789637/client.crt: no such file or directory
E0823 19:16:28.511469   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:16:53.972223   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:16:56.194145   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/enable-default-cni-573325/client.crt: no such file or directory
E0823 19:17:06.834220   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/auto-573325/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-895446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.071427011s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-895446 -n newest-cni-895446
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-895446 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-895446 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-895446 -n newest-cni-895446
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-895446 -n newest-cni-895446: exit status 2 (247.911897ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-895446 -n newest-cni-895446
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-895446 -n newest-cni-895446: exit status 2 (245.321368ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-895446 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-895446 -n newest-cni-895446
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-895446 -n newest-cni-895446
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (64.37s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-845804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E0823 19:17:21.657140   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/flannel-573325/client.crt: no such file or directory
E0823 19:17:43.891110   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
E0823 19:17:55.835187   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kindnet-573325/client.crt: no such file or directory
E0823 19:18:00.842306   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/functional-573778/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-845804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m4.371515077s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.54s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845804 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [020a96a8-585d-4cc9-9988-05d1fe0af725] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [020a96a8-585d-4cc9-9988-05d1fe0af725] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.043754951s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.54s)
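The DeployApp step above creates a pod from testdata/busybox.yaml (not shown in this log), waits for it to run, and probes it. A minimal sketch of the same flow, assuming a hypothetical stand-in manifest that matches the test's integration-test=busybox selector and the busybox image reported later by VerifyKubernetesImages:

cat <<'EOF' | kubectl --context embed-certs-845804 apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF

# Wait for the pod matching the test's label selector, then run the same probe.
kubectl --context embed-certs-845804 wait --for=condition=Ready \
  pod -l integration-test=busybox --timeout=8m
kubectl --context embed-certs-845804 exec busybox -- /bin/sh -c "ulimit -n"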

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tkh4x" [e56f4498-c333-40ea-b1b8-30d594589ae0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tkh4x" [e56f4498-c333-40ea-b1b8-30d594589ae0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.02043454s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)
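The check above amounts to waiting for the dashboard pods created before the restart to become healthy again. A rough kubectl equivalent, with the namespace and selector taken from the log:

kubectl --context no-preload-301101 -n kubernetes-dashboard wait \
  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m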

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-845804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-845804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.147100583s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-845804 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)
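The addon enablement above swaps the metrics-server image and registry via --images / --registries. A sketch of the same call plus a follow-up jsonpath query, an addition not performed by the test, to see which image the Deployment ended up with:

out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-845804 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain

# Inspect the container image on the resulting Deployment after the override.
kubectl --context embed-certs-845804 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'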

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-845804 --alsologtostderr -v=3
E0823 19:18:37.451827   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/bridge-573325/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-845804 --alsologtostderr -v=3: (1m31.740249269s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.74s)
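Stopping the VM takes roughly a minute and a half here; afterwards minikube status reports Stopped and exits 7, as the EnableAddonAfterStop step further below shows. A small sketch of the stop plus a status check that tolerates that exit code:

out/minikube-linux-amd64 stop -p embed-certs-845804 --alsologtostderr -v=3
out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-845804 \
  || echo "status exited $? (7 is expected once the host is stopped)"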

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tkh4x" [e56f4498-c333-40ea-b1b8-30d594589ae0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012224541s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-301101 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-301101 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
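The image verification runs crictl inside the VM over minikube ssh and scans the JSON for unexpected images. A sketch that lists the repo tags the same way; piping through jq is an assumption added for readability, not something the test does:

out/minikube-linux-amd64 ssh -p no-preload-301101 "sudo crictl images -o json" \
  | jq -r '.images[].repoTags[]?'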

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-301101 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301101 -n no-preload-301101
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301101 -n no-preload-301101: exit status 2 (238.359584ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301101 -n no-preload-301101
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301101 -n no-preload-301101: exit status 2 (241.219805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-301101 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301101 -n no-preload-301101
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301101 -n no-preload-301101
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.48s)
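The Pause subtests follow a fixed round trip: pause the profile, probe APIServer and Kubelet status (exit status 2 with "Paused"/"Stopped" is expected, hence the "may be ok" notes), unpause, probe again. A sketch of the same loop with the expected non-zero exits tolerated explicitly:

PROFILE=no-preload-301101

out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1

# While paused these return exit status 2 and print Paused / Stopped.
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" || echo "exit $? while paused"
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" || echo "exit $? while paused"

out/minikube-linux-amd64 unpause -p "$PROFILE" --alsologtostderr -v=1

# After unpause both probes should succeed again.
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE"
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE"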

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6qdn5" [ca26d535-1787-4f12-9fde-04d8f992a608] Running
E0823 19:19:10.682638   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01726402s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6qdn5" [ca26d535-1787-4f12-9fde-04d8f992a608] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010882763s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-355473 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-355473 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-355473 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-355473 -n old-k8s-version-355473
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-355473 -n old-k8s-version-355473: exit status 2 (232.321465ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-355473 -n old-k8s-version-355473
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-355473 -n old-k8s-version-355473: exit status 2 (233.605204ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-355473 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-355473 -n old-k8s-version-355473
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-355473 -n old-k8s-version-355473
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845804 -n embed-certs-845804
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845804 -n embed-certs-845804: exit status 7 (57.258223ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)
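This step shows that addon enablement works against a stopped profile: the host status probe exits 7 with "Stopped", yet addons enable dashboard still succeeds, so the addon can take effect on the next start. A sketch of the same two commands:

out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-845804 \
  || echo "exit $? (7 = stopped, expected at this point)"
out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845804 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4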

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (328.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-845804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E0823 19:20:03.918812   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/old-k8s-version-355473/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-845804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (5m28.645583914s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845804 -n embed-certs-845804
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (328.89s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gjpbs" [4111f60d-f3be-4115-87de-36f4a18051e6] Running
E0823 19:20:24.399240   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/old-k8s-version-355473/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020061502s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gjpbs" [4111f60d-f3be-4115-87de-36f4a18051e6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012932007s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-319240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-319240 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-319240 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240: exit status 2 (237.813606ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240: exit status 2 (235.156037ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-319240 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-319240 -n default-k8s-diff-port-319240
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8xzsn" [bba03344-3da2-4e33-9d40-590d80605124] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0823 19:25:33.727913   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/calico-573325/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8xzsn" [bba03344-3da2-4e33-9d40-590d80605124] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.01865865s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8xzsn" [bba03344-3da2-4e33-9d40-590d80605124] Running
E0823 19:25:52.700628   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/default-k8s-diff-port-319240/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012450078s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-845804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-845804 "sudo crictl images -o json"
E0823 19:25:53.326371   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/ingress-addon-legacy-594467/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-845804 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845804 -n embed-certs-845804
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845804 -n embed-certs-845804: exit status 2 (234.735752ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-845804 -n embed-certs-845804
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-845804 -n embed-certs-845804: exit status 2 (234.908704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-845804 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845804 -n embed-certs-845804
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-845804 -n embed-certs-845804
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.42s)

                                                
                                    

Test skip (36/302)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.0/cached-images 0
13 TestDownloadOnly/v1.28.0/binaries 0
14 TestDownloadOnly/v1.28.0/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
241 TestNetworkPlugins/group/kubenet 2.86
249 TestNetworkPlugins/group/cilium 3.27
262 TestStartStop/group/disable-driver-mounts 0.14
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-573325 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-573325" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 23 Aug 2023 18:57:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.39.90:8443
  name: kubernetes-upgrade-332106
contexts:
- context:
    cluster: kubernetes-upgrade-332106
    extensions:
    - extension:
        last-update: Wed, 23 Aug 2023 18:57:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-332106
  name: kubernetes-upgrade-332106
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-332106
  user:
    client-certificate: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kubernetes-upgrade-332106/client.crt
    client-key: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kubernetes-upgrade-332106/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-573325

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-573325"

                                                
                                                
----------------------- debugLogs end: kubenet-573325 [took: 2.716650189s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-573325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-573325
--- SKIP: TestNetworkPlugins/group/kubenet (2.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-573325 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-573325" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 23 Aug 2023 18:57:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.39.90:8443
  name: kubernetes-upgrade-332106
contexts:
- context:
    cluster: kubernetes-upgrade-332106
    extensions:
    - extension:
        last-update: Wed, 23 Aug 2023 18:57:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-332106
  name: kubernetes-upgrade-332106
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-332106
  user:
    client-certificate: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kubernetes-upgrade-332106/client.crt
    client-key: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/kubernetes-upgrade-332106/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-573325

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-573325" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-573325"

                                                
                                                
----------------------- debugLogs end: cilium-573325 [took: 3.133526947s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-573325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-573325
--- SKIP: TestNetworkPlugins/group/cilium (3.27s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-691500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-691500
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
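Note: these skipped groups still register placeholder profiles, which the test helpers then delete (see the "Cleaning up ... profile" steps above). A minimal sketch for verifying no profiles are left behind after such a run, assuming the same minikube binary and MINIKUBE_HOME as in this report:

  $ out/minikube-linux-amd64 profile list          # should no longer list kubenet-573325, cilium-573325, or disable-driver-mounts-691500
  $ out/minikube-linux-amd64 delete -p <profile>   # same cleanup command the helpers run for any leftover profile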

                                                
                                    