Test Report: KVM_Linux_containerd 15985

Commit: 49d57361cbdf0d306690482a173cc4589bc1e918 | Date: 2023-03-07 | Build: 28216

Failed tests (1/297)

Order  Failed test   Duration (s)
210    TestPreload   1036.22
TestPreload (1036.22s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0307 18:45:08.839077   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:45:25.776014   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m2.314138505s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-203208 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-203208 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.417849066s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-203208
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-203208: (1m32.007722664s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0307 18:47:15.578245   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:50:08.837664   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:50:18.626162   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:50:25.776671   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:52:15.578647   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:53:11.889139   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:55:08.839268   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:55:25.776671   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:57:15.578878   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 19:00:08.826062   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 19:00:08.838264   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 19:00:25.776761   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
preload_test.go:71: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: exit status 109 (13m36.2770409s)

-- stdout --
	* [test-preload-203208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node test-preload-203208 in cluster test-preload-203208
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing kvm2 VM for "test-preload-203208" ...
	* Preparing Kubernetes v1.24.4 on containerd 1.6.18 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0307 18:47:08.188999   26384 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:47:08.189163   26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:47:08.189221   26384 out.go:309] Setting ErrFile to fd 2...
	I0307 18:47:08.189235   26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:47:08.189633   26384 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	I0307 18:47:08.190229   26384 out.go:303] Setting JSON to false
	I0307 18:47:08.191033   26384 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5376,"bootTime":1678209452,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:47:08.191096   26384 start.go:135] virtualization: kvm guest
	I0307 18:47:08.193540   26384 out.go:177] * [test-preload-203208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:47:08.195219   26384 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:47:08.196770   26384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:47:08.195178   26384 notify.go:220] Checking for updates...
	I0307 18:47:08.198392   26384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:47:08.199832   26384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	I0307 18:47:08.201253   26384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:47:08.202663   26384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:47:08.204748   26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0307 18:47:08.205285   26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:47:08.205342   26384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:47:08.220069   26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0307 18:47:08.220563   26384 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:47:08.221076   26384 main.go:141] libmachine: Using API Version  1
	I0307 18:47:08.221096   26384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:47:08.221432   26384 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:47:08.221584   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:08.223753   26384 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
	I0307 18:47:08.225235   26384 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:47:08.225524   26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:47:08.225572   26384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:47:08.239705   26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0307 18:47:08.240091   26384 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:47:08.240557   26384 main.go:141] libmachine: Using API Version  1
	I0307 18:47:08.240573   26384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:47:08.240906   26384 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:47:08.241120   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:08.275331   26384 out.go:177] * Using the kvm2 driver based on existing profile
	I0307 18:47:08.276690   26384 start.go:296] selected driver: kvm2
	I0307 18:47:08.276702   26384 start.go:857] validating driver "kvm2" against &{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:47:08.276795   26384 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:47:08.277360   26384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:47:08.277421   26384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4052/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0307 18:47:08.291366   26384 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0307 18:47:08.291664   26384 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:47:08.291694   26384 cni.go:84] Creating CNI manager for ""
	I0307 18:47:08.291705   26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0307 18:47:08.291717   26384 start_flags.go:319] config:
	{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:47:08.291838   26384 iso.go:125] acquiring lock: {Name:mkd51cb229a70df75d89beefefdcafed4c3dd9f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:47:08.293852   26384 out.go:177] * Starting control plane node test-preload-203208 in cluster test-preload-203208
	I0307 18:47:08.296143   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:47:08.450857   26384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0307 18:47:08.450906   26384 cache.go:57] Caching tarball of preloaded images
	I0307 18:47:08.451048   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:47:08.453213   26384 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0307 18:47:08.454642   26384 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:47:08.614514   26384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0307 18:47:32.826448   26384 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:47:32.826536   26384 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:47:33.690125   26384 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on containerd
	I0307 18:47:33.690264   26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
	I0307 18:47:33.690465   26384 cache.go:193] Successfully downloaded all kic artifacts
	I0307 18:47:33.690499   26384 start.go:364] acquiring machines lock for test-preload-203208: {Name:mk86d1042b74b1a783c77f2a2445172eb6d30958 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 18:47:33.690551   26384 start.go:368] acquired machines lock for "test-preload-203208" in 35.693µs
	I0307 18:47:33.690566   26384 start.go:96] Skipping create...Using existing machine configuration
	I0307 18:47:33.690574   26384 fix.go:55] fixHost starting: 
	I0307 18:47:33.690832   26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:47:33.690865   26384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:47:33.704555   26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0307 18:47:33.704995   26384 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:47:33.705526   26384 main.go:141] libmachine: Using API Version  1
	I0307 18:47:33.705549   26384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:47:33.705815   26384 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:47:33.706046   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:33.706249   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetState
	I0307 18:47:33.707747   26384 fix.go:103] recreateIfNeeded on test-preload-203208: state=Stopped err=<nil>
	I0307 18:47:33.707767   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	W0307 18:47:33.707933   26384 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 18:47:33.710555   26384 out.go:177] * Restarting existing kvm2 VM for "test-preload-203208" ...
	I0307 18:47:33.712032   26384 main.go:141] libmachine: (test-preload-203208) Calling .Start
	I0307 18:47:33.712220   26384 main.go:141] libmachine: (test-preload-203208) Ensuring networks are active...
	I0307 18:47:33.712842   26384 main.go:141] libmachine: (test-preload-203208) Ensuring network default is active
	I0307 18:47:33.713296   26384 main.go:141] libmachine: (test-preload-203208) Ensuring network mk-test-preload-203208 is active
	I0307 18:47:33.713652   26384 main.go:141] libmachine: (test-preload-203208) Getting domain xml...
	I0307 18:47:33.714346   26384 main.go:141] libmachine: (test-preload-203208) Creating domain...
	I0307 18:47:34.910876   26384 main.go:141] libmachine: (test-preload-203208) Waiting to get IP...
	I0307 18:47:34.911746   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:34.912163   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:34.912255   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:34.912165   26419 retry.go:31] will retry after 212.425256ms: waiting for machine to come up
	I0307 18:47:35.126663   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:35.127105   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:35.127129   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.127053   26419 retry.go:31] will retry after 263.969499ms: waiting for machine to come up
	I0307 18:47:35.392652   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:35.393060   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:35.393084   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.393015   26419 retry.go:31] will retry after 468.684911ms: waiting for machine to come up
	I0307 18:47:35.863601   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:35.864010   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:35.864033   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.863947   26419 retry.go:31] will retry after 431.412452ms: waiting for machine to come up
	I0307 18:47:36.296448   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:36.296882   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:36.296912   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:36.296828   26419 retry.go:31] will retry after 752.77311ms: waiting for machine to come up
	I0307 18:47:37.050685   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:37.051090   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:37.051119   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.051041   26419 retry.go:31] will retry after 743.261623ms: waiting for machine to come up
	I0307 18:47:37.795856   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:37.796272   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:37.796308   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.796215   26419 retry.go:31] will retry after 1.170690029s: waiting for machine to come up
	I0307 18:47:38.968781   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:38.969233   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:38.969258   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:38.969184   26419 retry.go:31] will retry after 1.337094513s: waiting for machine to come up
	I0307 18:47:40.308636   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:40.309023   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:40.309045   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:40.308986   26419 retry.go:31] will retry after 1.490851661s: waiting for machine to come up
	I0307 18:47:41.801795   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:41.802239   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:41.802269   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:41.802176   26419 retry.go:31] will retry after 2.070649174s: waiting for machine to come up
	I0307 18:47:43.874879   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:43.875349   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:43.875380   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:43.875281   26419 retry.go:31] will retry after 2.737681725s: waiting for machine to come up
	I0307 18:47:46.616128   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:46.616688   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:46.616712   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:46.616637   26419 retry.go:31] will retry after 2.87929565s: waiting for machine to come up
	I0307 18:47:49.497470   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:49.498002   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:49.498030   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:49.497932   26419 retry.go:31] will retry after 4.103227875s: waiting for machine to come up
	I0307 18:47:53.606187   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.606663   26384 main.go:141] libmachine: (test-preload-203208) Found IP for machine: 192.168.39.212
	I0307 18:47:53.606696   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has current primary IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.606703   26384 main.go:141] libmachine: (test-preload-203208) Reserving static IP address...
	I0307 18:47:53.607103   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.607138   26384 main.go:141] libmachine: (test-preload-203208) Reserved static IP address: 192.168.39.212
	I0307 18:47:53.607159   26384 main.go:141] libmachine: (test-preload-203208) DBG | skip adding static IP to network mk-test-preload-203208 - found existing host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"}
	I0307 18:47:53.607180   26384 main.go:141] libmachine: (test-preload-203208) DBG | Getting to WaitForSSH function...
	I0307 18:47:53.607195   26384 main.go:141] libmachine: (test-preload-203208) Waiting for SSH to be available...
	I0307 18:47:53.609451   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.609920   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.609952   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.610021   26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH client type: external
	I0307 18:47:53.610088   26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH private key: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa (-rw-------)
	I0307 18:47:53.610128   26384 main.go:141] libmachine: (test-preload-203208) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0307 18:47:53.610153   26384 main.go:141] libmachine: (test-preload-203208) DBG | About to run SSH command:
	I0307 18:47:53.610166   26384 main.go:141] libmachine: (test-preload-203208) DBG | exit 0
	I0307 18:47:53.693376   26384 main.go:141] libmachine: (test-preload-203208) DBG | SSH cmd err, output: <nil>: 
	I0307 18:47:53.693716   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetConfigRaw
	I0307 18:47:53.694380   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:47:53.696583   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.696983   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.697018   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.697232   26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
	I0307 18:47:53.697422   26384 machine.go:88] provisioning docker machine ...
	I0307 18:47:53.697443   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:53.697627   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
	I0307 18:47:53.697782   26384 buildroot.go:166] provisioning hostname "test-preload-203208"
	I0307 18:47:53.697798   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
	I0307 18:47:53.697947   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:53.699860   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.700195   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.700225   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.700341   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:53.700502   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.700619   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.700716   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:53.700853   26384 main.go:141] libmachine: Using SSH client type: native
	I0307 18:47:53.701264   26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0307 18:47:53.701276   26384 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-203208 && echo "test-preload-203208" | sudo tee /etc/hostname
	I0307 18:47:53.818077   26384 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-203208
	
	I0307 18:47:53.818106   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:53.820950   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.821308   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.821334   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.821486   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:53.821689   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.821852   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.822005   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:53.822192   26384 main.go:141] libmachine: Using SSH client type: native
	I0307 18:47:53.822574   26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0307 18:47:53.822590   26384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-203208' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-203208/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-203208' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:47:53.938498   26384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
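The SSH command above keeps the machine's name resolvable locally: if no `/etc/hosts` entry ends with the hostname, it either rewrites the existing `127.0.1.1` alias or appends one. A minimal sketch of that logic, run against a throwaway copy instead of the real `/etc/hosts` (the original needs root via `sudo sed` / `sudo tee`):

```shell
# Same branch structure as the logged command, on a temp file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=test-preload-203208
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # replace the existing 127.0.1.1 alias with the machine name
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
grep '^127\.0\.1\.1' "$hosts"   # prints: 127.0.1.1 test-preload-203208
```

The "old-name" seed line is a stand-in; GNU `sed -i` is assumed, matching the Linux guest in the log.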
	I0307 18:47:53.938531   26384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15985-4052/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-4052/.minikube}
	I0307 18:47:53.938554   26384 buildroot.go:174] setting up certificates
	I0307 18:47:53.938564   26384 provision.go:83] configureAuth start
	I0307 18:47:53.938577   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
	I0307 18:47:53.938823   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:47:53.941788   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.942174   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.942193   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.942389   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:53.944344   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.944651   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.944679   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.944819   26384 provision.go:138] copyHostCerts
	I0307 18:47:53.944864   26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem, removing ...
	I0307 18:47:53.944874   26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem
	I0307 18:47:53.944936   26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem (1123 bytes)
	I0307 18:47:53.945028   26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem, removing ...
	I0307 18:47:53.945042   26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem
	I0307 18:47:53.945069   26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem (1679 bytes)
	I0307 18:47:53.945118   26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem, removing ...
	I0307 18:47:53.945125   26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem
	I0307 18:47:53.945144   26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem (1078 bytes)
	I0307 18:47:53.945185   26384 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem org=jenkins.test-preload-203208 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube test-preload-203208]
	I0307 18:47:54.280078   26384 provision.go:172] copyRemoteCerts
	I0307 18:47:54.280140   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:47:54.280162   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.282745   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.283051   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.283081   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.283221   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.283408   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.283548   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.283668   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.366577   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 18:47:54.389837   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0307 18:47:54.411718   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 18:47:54.433964   26384 provision.go:86] duration metric: configureAuth took 495.388641ms
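The configureAuth block above generates a server certificate signed by the minikube CA, with the SAN list the `generating server cert` line reports, then scp's the CA cert and server pair to `/etc/docker`. minikube does this in Go; as a hedged sketch of the same certificate shape, using `openssl` (assumed installed) and a temp dir in place of the `.minikube` tree:

```shell
dir=$(mktemp -d)
# CA key + self-signed cert (the ca-key.pem / ca.pem pair)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca-key.pem" -out "$dir/ca.pem" -subj "/CN=minikubeCA" 2>/dev/null
# Server key + CSR carrying the org the log shows
openssl req -newkey rsa:2048 -nodes \
  -keyout "$dir/server-key.pem" -out "$dir/server.csr" \
  -subj "/O=jenkins.test-preload-203208" 2>/dev/null
# Sign with the SANs from the san=[...] log line
printf 'subjectAltName=IP:192.168.39.212,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:test-preload-203208\n' > "$dir/san.cnf"
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" -CAkey "$dir/ca-key.pem" \
  -CAcreateserial -days 1 -extfile "$dir/san.cnf" -out "$dir/server.pem" 2>/dev/null
openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"   # reports ": OK"
```

File names mirror the log's paths, but the key sizes, validity, and subject fields here are illustrative, not minikube's actual values.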
	I0307 18:47:54.433989   26384 buildroot.go:189] setting minikube options for container-runtime
	I0307 18:47:54.434187   26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0307 18:47:54.434202   26384 machine.go:91] provisioned docker machine in 736.766542ms
	I0307 18:47:54.434211   26384 start.go:300] post-start starting for "test-preload-203208" (driver="kvm2")
	I0307 18:47:54.434220   26384 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:47:54.434345   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.434642   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:47:54.434666   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.437421   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.437782   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.437822   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.437973   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.438168   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.438298   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.438399   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.518617   26384 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:47:54.522870   26384 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 18:47:54.522893   26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/addons for local assets ...
	I0307 18:47:54.522953   26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/files for local assets ...
	I0307 18:47:54.523037   26384 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem -> 111062.pem in /etc/ssl/certs
	I0307 18:47:54.523135   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 18:47:54.530858   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /etc/ssl/certs/111062.pem (1708 bytes)
	I0307 18:47:54.553945   26384 start.go:303] post-start completed in 119.718718ms
	I0307 18:47:54.553971   26384 fix.go:57] fixHost completed within 20.863395553s
	I0307 18:47:54.553997   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.556837   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.557183   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.557209   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.557405   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.557590   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.557727   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.557837   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.558046   26384 main.go:141] libmachine: Using SSH client type: native
	I0307 18:47:54.558428   26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0307 18:47:54.558440   26384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 18:47:54.666375   26384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678214874.615825414
	
	I0307 18:47:54.666396   26384 fix.go:207] guest clock: 1678214874.615825414
	I0307 18:47:54.666406   26384 fix.go:220] Guest: 2023-03-07 18:47:54.615825414 +0000 UTC Remote: 2023-03-07 18:47:54.553975557 +0000 UTC m=+46.403616421 (delta=61.849857ms)
	I0307 18:47:54.666428   26384 fix.go:191] guest clock delta is within tolerance: 61.849857ms
	I0307 18:47:54.666435   26384 start.go:83] releasing machines lock for "test-preload-203208", held for 20.975873468s
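The guest-clock check above runs `date +%s.%N` over SSH, subtracts the host's timestamp, and accepts the result if the drift is inside a tolerance. The same arithmetic with the two timestamps from the log, with a 1 s tolerance as a stand-in for whatever bound `fix.go` actually uses:

```shell
awk -v guest=1678214874.615825414 -v host=1678214874.553975557 'BEGIN {
  delta = guest - host
  if (delta < 0) delta = -delta          # drift can go either way
  print (delta < 1.0 ? "within tolerance" : "drift too large")
}'
# prints: within tolerance   (delta ~= 61.8ms, matching the log)
```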
	I0307 18:47:54.666460   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.666725   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:47:54.669426   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.669811   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.669848   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.669973   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.670422   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.670589   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.670656   26384 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0307 18:47:54.670718   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.670826   26384 ssh_runner.go:195] Run: cat /version.json
	I0307 18:47:54.670851   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.673445   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.673511   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.673800   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.673827   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.673938   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.673967   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.674023   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.674214   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.674218   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.674394   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.674402   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.674565   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.674569   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.674704   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.759342   26384 ssh_runner.go:195] Run: systemctl --version
	I0307 18:47:54.887421   26384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 18:47:54.893321   26384 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 18:47:54.893397   26384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:47:54.911277   26384 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 18:47:54.911299   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:47:54.911409   26384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:47:58.947601   26384 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.036162087s)
	I0307 18:47:58.947737   26384 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0307 18:47:58.947802   26384 ssh_runner.go:195] Run: which lz4
	I0307 18:47:58.951928   26384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 18:47:58.955886   26384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 18:47:58.955917   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
	I0307 18:48:00.759696   26384 containerd.go:551] Took 1.807807 seconds to copy over tarball
	I0307 18:48:00.759760   26384 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 18:48:03.914699   26384 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.15491167s)
	I0307 18:48:03.914730   26384 containerd.go:558] Took 3.155008 seconds to extract the tarball
	I0307 18:48:03.914761   26384 ssh_runner.go:146] rm: /preloaded.tar.lz4
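The preload path above is: `stat` for an existing tarball, scp one in when missing, extract it into `/var`, then remove it. A miniature of that flow on temp dirs, using gzip in place of lz4 (the real command is `tar -I lz4 -C /var -xf /preloaded.tar.lz4`; the directory layout below is a stand-in, not containerd's real content store):

```shell
src=$(mktemp -d); var=$(mktemp -d)
mkdir -p "$src/lib/containerd"
echo layer-blob > "$src/lib/containerd/blob"
tar -C "$src" -czf "$var/preloaded.tar.gz" lib   # the shipped tarball
tar -C "$var" -xzf "$var/preloaded.tar.gz"       # mirrors the extract step
rm "$var/preloaded.tar.gz"                       # mirrors rm: /preloaded.tar.lz4
cat "$var/lib/containerd/blob"                   # prints: layer-blob
```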
	I0307 18:48:03.954806   26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:48:04.051307   26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:48:04.067055   26384 start.go:485] detecting cgroup driver to use...
	I0307 18:48:04.067143   26384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 18:48:06.737555   26384 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.670382401s)
	I0307 18:48:06.737634   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:48:06.749559   26384 docker.go:186] disabling cri-docker service (if available) ...
	I0307 18:48:06.749615   26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 18:48:06.761329   26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 18:48:06.773038   26384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 18:48:06.870678   26384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 18:48:06.979667   26384 docker.go:202] disabling docker service ...
	I0307 18:48:06.979735   26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 18:48:06.992492   26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 18:48:07.004415   26384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 18:48:07.107126   26384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 18:48:07.218342   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 18:48:07.230717   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:48:07.248387   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 18:48:07.257036   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:48:07.266682   26384 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:48:07.266740   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:48:07.276084   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:48:07.285768   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:48:07.295044   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:48:07.304543   26384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:48:07.314540   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
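The run of `sed` passes above edits `/etc/containerd/config.toml` in place: forcing `SystemdCgroup = false` (cgroupfs), swapping legacy runc runtimes for `io.containerd.runc.v2`, and pointing the CNI `conf_dir` at `/etc/cni/net.d`. Two of those passes, applied verbatim to a throwaway file with a minimal stand-in config (not a full containerd config):

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/etc/cni/net.mk"
EOF
# identical sed expressions to the logged commands
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
grep -E 'SystemdCgroup|conf_dir' "$cfg"
```

The leading-whitespace capture group `( *)` is what lets the rewrite preserve TOML indentation.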
	I0307 18:48:07.324106   26384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:48:07.332553   26384 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0307 18:48:07.332592   26384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0307 18:48:07.345783   26384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:48:07.354423   26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:48:07.450860   26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:48:07.472878   26384 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 18:48:07.472979   26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 18:48:07.480739   26384 retry.go:31] will retry after 1.355526534s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
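The retry above (`will retry after 1.355526534s`) is minikube polling for the containerd socket after the restart. The same wait-for-path pattern as a generic bounded poll loop; the path and timings below are stand-ins, not minikube's:

```shell
sock=$(mktemp -u)              # nonexistent path, like the missing socket
( sleep 1; touch "$sock" ) &   # another process creates it later
ready=no
for _ in 1 2 3 4 5 6 7 8 9 10; do
  if [ -e "$sock" ]; then ready=yes; break; fi
  sleep 0.5
done
echo "$ready"                  # prints: yes
```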
	I0307 18:48:08.836380   26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 18:48:08.842045   26384 start.go:553] Will wait 60s for crictl version
	I0307 18:48:08.842108   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:48:08.846136   26384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:48:08.879500   26384 start.go:569] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.6.18
	RuntimeApiVersion:  v1alpha2
	I0307 18:48:08.879555   26384 ssh_runner.go:195] Run: containerd --version
	I0307 18:48:08.907039   26384 ssh_runner.go:195] Run: containerd --version
	I0307 18:48:08.937824   26384 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.18 ...
	I0307 18:48:08.939189   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:48:08.941766   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:48:08.942253   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:48:08.942274   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:48:08.942470   26384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0307 18:48:08.946333   26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
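The `/bin/bash -c` command above refreshes the `host.minikube.internal` mapping atomically: filter out any stale entry, append the gateway IP, write to a temp file, then copy it over `/etc/hosts` with sudo. The same filter-and-append on a throwaway file (seeded with a deliberately stale `192.168.39.9` entry):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal$' "$hosts"
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"       # stands in for: sudo cp /tmp/h.$$ /etc/hosts
grep 'host.minikube.internal' "$hosts"
```

Writing to `/tmp/h.$$` first (as the real command does) avoids truncating `/etc/hosts` mid-read, since the `grep` reads the same file the redirect would clobber.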
	I0307 18:48:08.958372   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:48:08.958447   26384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:48:08.984433   26384 containerd.go:608] all images are preloaded for containerd runtime.
	I0307 18:48:08.984454   26384 containerd.go:522] Images already preloaded, skipping extraction
	I0307 18:48:08.984503   26384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:48:09.011132   26384 containerd.go:608] all images are preloaded for containerd runtime.
	I0307 18:48:09.011156   26384 cache_images.go:84] Images are preloaded, skipping loading
	I0307 18:48:09.011204   26384 ssh_runner.go:195] Run: sudo crictl info
	I0307 18:48:09.039874   26384 cni.go:84] Creating CNI manager for ""
	I0307 18:48:09.039898   26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0307 18:48:09.039907   26384 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 18:48:09.039928   26384 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-203208 NodeName:test-preload-203208 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 18:48:09.040095   26384 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-203208"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 18:48:09.040202   26384 kubeadm.go:968] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-203208 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 18:48:09.040264   26384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0307 18:48:09.049030   26384 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:48:09.049088   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 18:48:09.057226   26384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
	I0307 18:48:09.073102   26384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:48:09.087939   26384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0307 18:48:09.103091   26384 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0307 18:48:09.106714   26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:48:09.118609   26384 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208 for IP: 192.168.39.212
	I0307 18:48:09.118642   26384 certs.go:186] acquiring lock for shared ca certs: {Name:mk07c09235b5b83043c0b2b2f22c2249661f377a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:48:09.118791   26384 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key
	I0307 18:48:09.118849   26384 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key
	I0307 18:48:09.118912   26384 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key
	I0307 18:48:09.118967   26384 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key.543da273
	I0307 18:48:09.119053   26384 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key
	I0307 18:48:09.119150   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem (1338 bytes)
	W0307 18:48:09.119182   26384 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106_empty.pem, impossibly tiny 0 bytes
	I0307 18:48:09.119193   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 18:48:09.119222   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem (1078 bytes)
	I0307 18:48:09.119259   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:48:09.119296   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem (1679 bytes)
	I0307 18:48:09.119354   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem (1708 bytes)
	I0307 18:48:09.119887   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0307 18:48:09.142561   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 18:48:09.164647   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 18:48:09.186856   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 18:48:09.209055   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:48:09.233821   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 18:48:09.256607   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:48:09.279276   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:48:09.301654   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:48:09.323040   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem --> /usr/share/ca-certificates/11106.pem (1338 bytes)
	I0307 18:48:09.344849   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /usr/share/ca-certificates/111062.pem (1708 bytes)
	I0307 18:48:09.366857   26384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 18:48:09.382598   26384 ssh_runner.go:195] Run: openssl version
	I0307 18:48:09.387988   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:48:09.396852   26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:48:09.401359   26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:03 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:48:09.401436   26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:48:09.406740   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 18:48:09.415682   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11106.pem && ln -fs /usr/share/ca-certificates/11106.pem /etc/ssl/certs/11106.pem"
	I0307 18:48:09.424547   26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11106.pem
	I0307 18:48:09.428975   26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:09 /usr/share/ca-certificates/11106.pem
	I0307 18:48:09.429015   26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11106.pem
	I0307 18:48:09.434193   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11106.pem /etc/ssl/certs/51391683.0"
	I0307 18:48:09.443361   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111062.pem && ln -fs /usr/share/ca-certificates/111062.pem /etc/ssl/certs/111062.pem"
	I0307 18:48:09.452688   26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111062.pem
	I0307 18:48:09.457057   26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:09 /usr/share/ca-certificates/111062.pem
	I0307 18:48:09.457108   26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111062.pem
	I0307 18:48:09.462237   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111062.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 18:48:09.471411   26384 kubeadm.go:401] StartCluster: {Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:48:09.471554   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 18:48:09.471596   26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 18:48:09.501095   26384 cri.go:87] found id: ""
	I0307 18:48:09.501172   26384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 18:48:09.510140   26384 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0307 18:48:09.510163   26384 kubeadm.go:633] restartCluster start
	I0307 18:48:09.510218   26384 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 18:48:09.518643   26384 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:09.519032   26384 kubeconfig.go:135] verify returned: extract IP: "test-preload-203208" does not appear in /home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:48:09.519129   26384 kubeconfig.go:146] "test-preload-203208" context is missing from /home/jenkins/minikube-integration/15985-4052/kubeconfig - will repair!
	I0307 18:48:09.519386   26384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4052/kubeconfig: {Name:mk89c8bdc0292c804b7314ba2438e95e1215b3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:48:09.519958   26384 kapi.go:59] client config for test-preload-203208: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:48:09.520801   26384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 18:48:09.528914   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:09.528956   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:09.538990   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:10.039696   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:10.039767   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:10.050769   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:10.539371   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:10.539470   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:10.550785   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:11.039988   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:11.040093   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:11.051278   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:11.539936   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:11.540040   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:11.551371   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:12.040000   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:12.040077   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:12.051583   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:12.539114   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:12.539176   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:12.550419   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:13.040079   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:13.040172   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:13.052432   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:13.540058   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:13.540141   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:13.551703   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:14.039765   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:14.039847   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:14.051403   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:14.540016   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:14.540094   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:14.552136   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:15.039754   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:15.039852   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:15.051397   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:15.539956   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:15.540068   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:15.551741   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:16.039191   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:16.039261   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:16.050954   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:16.539468   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:16.539533   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:16.550947   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:17.039455   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:17.039523   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:17.050527   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:17.539123   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:17.539207   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:17.551333   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:18.039916   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:18.039999   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:18.051774   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:18.539677   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:18.539783   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:18.551481   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.039543   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:19.039622   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:19.051157   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.539906   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:19.539971   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:19.551522   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.551546   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:19.551615   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:19.562103   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.562127   26384 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0307 18:48:19.562135   26384 kubeadm.go:1120] stopping kube-system containers ...
	I0307 18:48:19.562145   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0307 18:48:19.562200   26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 18:48:19.596473   26384 cri.go:87] found id: ""
	I0307 18:48:19.596545   26384 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 18:48:19.611484   26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:48:19.620277   26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:48:19.620347   26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:48:19.629402   26384 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0307 18:48:19.629420   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:19.729048   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:20.693486   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:21.045927   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:21.125427   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:21.208989   26384 api_server.go:51] waiting for apiserver process to appear ...
	I0307 18:48:21.209053   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:21.727096   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:22.226678   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:22.726635   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:23.227460   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:23.726652   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:24.226895   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:24.727601   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:25.227632   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:25.727342   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:26.226885   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:26.727250   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:27.226755   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:27.727168   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:28.227623   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:28.726792   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:29.227535   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:29.727199   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:30.227533   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:30.726863   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:31.226913   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:31.726742   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:32.226629   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:32.726562   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:33.227256   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:33.727095   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:34.227636   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:34.727529   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:35.226672   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:35.239643   26384 api_server.go:71] duration metric: took 14.030659958s to wait for apiserver process to appear ...
	I0307 18:48:35.239673   26384 api_server.go:87] waiting for apiserver healthz status ...
	I0307 18:48:35.239689   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:40.240554   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:48:40.741289   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:45.742137   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:48:46.240766   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:51.241530   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:48:51.740794   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:55.622725   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:40614->192.168.39.212:8443: read: connection reset by peer
	I0307 18:48:55.741069   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:55.741730   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:56.241350   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:56.241974   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:56.741625   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:56.742311   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:57.240872   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:57.241486   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:57.741098   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:57.741815   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:58.240688   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:58.241449   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:58.740916   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:58.741450   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:59.241002   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:59.241562   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:59.741376   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:59.741967   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:00.241554   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:00.242185   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:00.740765   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:00.741366   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:01.240922   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:01.241524   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:01.741093   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:01.741672   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:02.241289   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:02.241821   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:02.741466   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:02.742055   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:03.240707   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:03.241321   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:03.741112   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:03.741706   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:04.241289   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:04.241805   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:04.741475   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:04.742120   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:05.240659   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:05.241205   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:05.740827   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:05.741407   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:06.240957   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:06.241520   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:06.741097   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:06.741687   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:07.241323   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:07.241898   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:07.741557   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:07.742492   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:08.241389   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:08.242007   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:08.741481   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:08.742046   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:09.240755   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:09.241344   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:09.741175   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:09.741776   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:10.241384   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:10.242065   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:10.741689   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:10.742367   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:11.240908   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:11.241508   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:11.741066   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:11.741702   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:12.241340   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:12.241992   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:12.741591   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:12.742200   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:13.240991   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:13.241618   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:13.741474   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:13.742095   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:14.240668   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:14.241302   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:14.740851   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:14.741426   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:15.240983   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:15.241592   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:15.741169   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:15.741706   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:16.241315   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:16.241927   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:16.741520   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:16.742200   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:17.240744   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:17.241351   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:17.740916   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:22.742180   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:49:23.240982   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:28.241459   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:49:28.740696   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:33.740940   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:49:34.241557   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:37.998029   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:36774->192.168.39.212:8443: read: connection reset by peer
	I0307 18:49:38.240706   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:38.240797   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:38.274793   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:38.274811   26384 cri.go:87] found id: "5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	I0307 18:49:38.274816   26384 cri.go:87] found id: ""
	I0307 18:49:38.274822   26384 logs.go:277] 2 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]
	I0307 18:49:38.274884   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.279183   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.283139   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:38.283194   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:38.310826   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:38.310844   26384 cri.go:87] found id: ""
	I0307 18:49:38.310850   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:38.310891   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.314471   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:38.314538   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:38.344851   26384 cri.go:87] found id: ""
	I0307 18:49:38.344881   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.344889   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:38.344894   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:38.344965   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:38.377525   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:38.377548   26384 cri.go:87] found id: ""
	I0307 18:49:38.377555   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:38.377609   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.381815   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:38.381869   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:38.417825   26384 cri.go:87] found id: ""
	I0307 18:49:38.417845   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.417851   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:38.417855   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:38.417925   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:38.454042   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:38.454062   26384 cri.go:87] found id: "a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	I0307 18:49:38.454066   26384 cri.go:87] found id: ""
	I0307 18:49:38.454073   26384 logs.go:277] 2 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]
	I0307 18:49:38.454130   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.458203   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.461976   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:38.462036   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:38.498530   26384 cri.go:87] found id: ""
	I0307 18:49:38.498555   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.498566   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:38.498573   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:38.498623   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:38.545888   26384 cri.go:87] found id: ""
	I0307 18:49:38.545918   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.545926   26384 logs.go:279] No container was found matching "storage-provisioner"
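The discovery phase above runs `sudo crictl ps -a --quiet --name=<name>` per component and splits stdout into container IDs; the empty `found id: ""` entry after each list suggests the raw output's trailing newline is carried through before filtering. A hypothetical parser for that step (`parseContainerIDs` is an illustrative name, not minikube's `cri.go` API):

```go
package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs splits `crictl ps -a --quiet` output (one container ID
// per line) into a slice, dropping the empty element that a trailing
// newline leaves behind.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	// Two IDs plus the trailing newline, as for kube-apiserver above.
	raw := "fe19f4555...\n5e2f1fd0c...\n"
	fmt.Println(len(parseContainerIDs(raw))) // 2
}
```

An empty result (as for coredns, kube-proxy, kindnet, and storage-provisioner here) is what produces the `0 containers: []` / `No container was found matching` warnings.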
	I0307 18:49:38.545936   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:38.545952   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:38.596180   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:38.596211   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:38.657673   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:38.657718   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:38.670963   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:38.670998   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:38.710963   26384 logs.go:123] Gathering logs for kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a] ...
	I0307 18:49:38.710992   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	W0307 18:49:38.740233   26384 logs.go:130] failed kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:49:38.717772    1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
	 output: 
	** stderr ** 
	E0307 18:49:38.717772    1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
	
	** /stderr **
	I0307 18:49:38.740259   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:38.740272   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:38.769176   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:38.769208   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:38.816001   26384 logs.go:123] Gathering logs for kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4] ...
	I0307 18:49:38.816029   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	W0307 18:49:38.847807   26384 logs.go:130] failed kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:49:38.825690    1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
	 output: 
	** stderr ** 
	E0307 18:49:38.825690    1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
	
	** /stderr **
	I0307 18:49:38.847829   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:38.847839   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:38.960358   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:38.960378   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:38.960391   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:39.024178   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:39.024209   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
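Once healthz probing stalls, the cycle above gathers each log source via a fixed shell command over SSH. The command strings are taken verbatim from the log; the `gatherCommands` helper wrapping them is illustrative only, not minikube's API:

```go
package main

import "fmt"

// gatherCommands maps each log source to the shell command the test run
// above executes for it. Per-container sources use `crictl logs` against
// the IDs found during discovery.
func gatherCommands(containerIDs map[string]string) map[string]string {
	cmds := map[string]string{
		"containerd": "sudo journalctl -u containerd -n 400",
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, id := range containerIDs {
		cmds[name] = fmt.Sprintf("sudo /usr/bin/crictl logs --tail 400 %s", id)
	}
	return cmds
}

func main() {
	cmds := gatherCommands(map[string]string{
		"etcd": "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7",
	})
	fmt.Println(cmds["etcd"])
}
```

A `crictl logs` command fails with `NotFound ... Process exited with status 1` when the container (e.g. the older kube-apiserver and kube-controller-manager instances above) has been garbage-collected between discovery and gathering.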
	I0307 18:49:41.561116   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:41.561705   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:41.741078   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:41.741163   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:41.770944   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:41.770967   26384 cri.go:87] found id: ""
	I0307 18:49:41.770975   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:41.771032   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.774913   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:41.774977   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:41.802816   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:41.802838   26384 cri.go:87] found id: ""
	I0307 18:49:41.802847   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:41.802892   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.806570   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:41.806610   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:41.835237   26384 cri.go:87] found id: ""
	I0307 18:49:41.835270   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.835276   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:41.835281   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:41.835337   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:41.870305   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:41.870323   26384 cri.go:87] found id: ""
	I0307 18:49:41.870329   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:41.870376   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.874332   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:41.874383   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:41.901971   26384 cri.go:87] found id: ""
	I0307 18:49:41.901993   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.901999   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:41.902005   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:41.902057   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:41.929792   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:41.929823   26384 cri.go:87] found id: ""
	I0307 18:49:41.929834   26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:41.929885   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.933861   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:41.933945   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:41.962195   26384 cri.go:87] found id: ""
	I0307 18:49:41.962222   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.962230   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:41.962237   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:41.962290   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:41.990939   26384 cri.go:87] found id: ""
	I0307 18:49:41.990965   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.990972   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:41.990984   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:41.990994   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:42.052031   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:42.052054   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:42.052069   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:42.081594   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:42.081622   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:42.109456   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:42.109493   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:42.177139   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:42.177180   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:42.226652   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:42.226679   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:42.287629   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:42.287659   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:42.299095   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:42.299115   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:42.340655   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:42.340684   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:44.881007   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:44.881568   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:45.241058   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:45.241130   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:45.268565   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:45.268588   26384 cri.go:87] found id: ""
	I0307 18:49:45.268596   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:45.268650   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.272618   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:45.272685   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:45.299447   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:45.299471   26384 cri.go:87] found id: ""
	I0307 18:49:45.299479   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:45.299528   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.303332   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:45.303397   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:45.332836   26384 cri.go:87] found id: ""
	I0307 18:49:45.332863   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.332873   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:45.332881   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:45.332989   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:45.359776   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:45.359795   26384 cri.go:87] found id: ""
	I0307 18:49:45.359805   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:45.359864   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.363663   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:45.363725   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:45.389419   26384 cri.go:87] found id: ""
	I0307 18:49:45.389448   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.389459   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:45.389465   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:45.389523   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:45.415773   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:45.415796   26384 cri.go:87] found id: ""
	I0307 18:49:45.415804   26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:45.415860   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.419687   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:45.419754   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:45.448748   26384 cri.go:87] found id: ""
	I0307 18:49:45.448777   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.448786   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:45.448791   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:45.448854   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:45.474641   26384 cri.go:87] found id: ""
	I0307 18:49:45.474669   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.474679   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:45.474696   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:45.474711   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:45.486226   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:45.486249   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:45.545694   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:45.545714   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:45.545726   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:45.591466   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:45.591493   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:45.623810   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:45.623841   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:45.686240   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:45.686268   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:45.720278   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:45.720302   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:45.745876   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:45.745913   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:45.809485   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:45.809518   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:48.348770   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:48.349502   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:48.741584   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:48.741651   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:48.777550   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:48.777572   26384 cri.go:87] found id: ""
	I0307 18:49:48.777578   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:48.777636   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.782172   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:48.782233   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:48.818792   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:48.818817   26384 cri.go:87] found id: ""
	I0307 18:49:48.818824   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:48.818869   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.823044   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:48.823106   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:48.857459   26384 cri.go:87] found id: ""
	I0307 18:49:48.857484   26384 logs.go:277] 0 containers: []
	W0307 18:49:48.857491   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:48.857498   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:48.857556   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:48.889707   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:48.889728   26384 cri.go:87] found id: ""
	I0307 18:49:48.889735   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:48.889778   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.894345   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:48.894420   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:48.933590   26384 cri.go:87] found id: ""
	I0307 18:49:48.933610   26384 logs.go:277] 0 containers: []
	W0307 18:49:48.933617   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:48.933622   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:48.933667   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:48.967476   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:48.967495   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:48.967499   26384 cri.go:87] found id: ""
	I0307 18:49:48.967506   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:48.967549   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.971759   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.975656   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:48.975714   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:49.026784   26384 cri.go:87] found id: ""
	I0307 18:49:49.026821   26384 logs.go:277] 0 containers: []
	W0307 18:49:49.026831   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:49.026839   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:49.026900   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:49.055435   26384 cri.go:87] found id: ""
	I0307 18:49:49.055458   26384 logs.go:277] 0 containers: []
	W0307 18:49:49.055465   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:49.055476   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:49.055490   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:49.089020   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:49.089048   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:49.138877   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:49.138913   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:49.153088   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:49.153113   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:49.220054   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:49.220079   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:49.220098   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:49.260102   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:49.260132   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:49.288829   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:49.288855   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:49.360373   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:49.360411   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:49.390432   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:49.390471   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:49.438326   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:49.438360   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:51.999825   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:52.000476   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:52.240790   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:52.240869   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:52.268760   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:52.268782   26384 cri.go:87] found id: ""
	I0307 18:49:52.268790   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:52.268860   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.273290   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:52.273355   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:52.303004   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:52.303024   26384 cri.go:87] found id: ""
	I0307 18:49:52.303031   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:52.303070   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.307394   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:52.307454   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:52.334227   26384 cri.go:87] found id: ""
	I0307 18:49:52.334252   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.334259   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:52.334263   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:52.334308   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:52.365944   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:52.365964   26384 cri.go:87] found id: ""
	I0307 18:49:52.365971   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:52.366014   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.369575   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:52.369631   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:52.399970   26384 cri.go:87] found id: ""
	I0307 18:49:52.399998   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.400008   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:52.400015   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:52.400080   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:52.428372   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:52.428394   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:52.428399   26384 cri.go:87] found id: ""
	I0307 18:49:52.428404   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:52.428452   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.432426   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.436419   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:52.436468   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:52.465745   26384 cri.go:87] found id: ""
	I0307 18:49:52.465777   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.465786   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:52.465794   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:52.465851   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:52.493993   26384 cri.go:87] found id: ""
	I0307 18:49:52.494022   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.494032   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:52.494048   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:52.494063   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:52.562310   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:52.562349   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:52.601842   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:52.601867   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:52.663702   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:52.663735   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:52.676175   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:52.676205   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:52.725457   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:52.725478   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:52.725491   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:52.773421   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:52.773446   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:52.820180   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:52.820212   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:52.854035   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:52.854060   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:52.882963   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:52.882993   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:55.412727   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:55.413292   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:55.740694   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:55.740782   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:55.769593   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:55.769617   26384 cri.go:87] found id: ""
	I0307 18:49:55.769624   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:55.769675   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.773846   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:55.773918   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:55.799820   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:55.799844   26384 cri.go:87] found id: ""
	I0307 18:49:55.799852   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:55.799904   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.803655   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:55.803714   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:55.830795   26384 cri.go:87] found id: ""
	I0307 18:49:55.830820   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.830829   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:55.830840   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:55.830892   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:55.861486   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:55.861511   26384 cri.go:87] found id: ""
	I0307 18:49:55.861519   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:55.861571   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.865664   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:55.865712   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:55.892035   26384 cri.go:87] found id: ""
	I0307 18:49:55.892057   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.892067   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:55.892074   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:55.892122   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:55.921473   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:55.921491   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:55.921503   26384 cri.go:87] found id: ""
	I0307 18:49:55.921511   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:55.921560   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.925654   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.929475   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:55.929539   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:55.956526   26384 cri.go:87] found id: ""
	I0307 18:49:55.956559   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.956566   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:55.956571   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:55.956614   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:55.983852   26384 cri.go:87] found id: ""
	I0307 18:49:55.983873   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.983879   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:55.983891   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:55.983905   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:56.013373   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:56.013404   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:56.075477   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:56.075514   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:56.134932   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:56.134953   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:56.134963   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:56.162676   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:56.162702   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:56.205835   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:56.205864   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:56.254193   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:56.254226   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:56.291170   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:56.291199   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:56.303219   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:56.303244   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:56.338501   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:56.338530   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:58.906800   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:58.907377   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:59.240745   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:59.240816   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:59.270117   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:59.270138   26384 cri.go:87] found id: ""
	I0307 18:49:59.270148   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:59.270194   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.277486   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:59.277555   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:59.319990   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:59.320008   26384 cri.go:87] found id: ""
	I0307 18:49:59.320015   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:59.320056   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.324577   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:59.324620   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:59.355279   26384 cri.go:87] found id: ""
	I0307 18:49:59.355308   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.355318   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:59.355325   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:59.355383   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:59.385970   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:59.386019   26384 cri.go:87] found id: ""
	I0307 18:49:59.386029   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:59.386084   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.389898   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:59.389957   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:59.418100   26384 cri.go:87] found id: ""
	I0307 18:49:59.418123   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.418132   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:59.418141   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:59.418199   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:59.448963   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:59.448984   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:59.448990   26384 cri.go:87] found id: ""
	I0307 18:49:59.448998   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:59.449053   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.452973   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.456699   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:59.456745   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:59.487041   26384 cri.go:87] found id: ""
	I0307 18:49:59.487066   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.487075   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:59.487081   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:59.487141   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:59.520702   26384 cri.go:87] found id: ""
	I0307 18:49:59.520733   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.520744   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:59.520756   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:59.520770   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:59.534981   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:59.535020   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:59.571150   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:59.571176   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:59.608785   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:59.608815   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	W0307 18:49:59.635030   26384 logs.go:130] failed kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:49:59.613980    2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
	 output: 
	** stderr ** 
	E0307 18:49:59.613980    2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
	
	** /stderr **
	I0307 18:49:59.635047   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:59.635057   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:59.681919   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:59.681947   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:59.738173   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:59.738205   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:59.789970   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:59.789991   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:59.790005   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:59.859269   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:59.859302   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:59.901677   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:59.901708   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:02.439332   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:07.439703   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:50:07.741227   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:07.741304   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:07.771935   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:07.771958   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:50:07.771964   26384 cri.go:87] found id: ""
	I0307 18:50:07.771972   26384 logs.go:277] 2 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:50:07.772033   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.775931   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.779533   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:07.779583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:07.807355   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:07.807372   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:50:07.807376   26384 cri.go:87] found id: ""
	I0307 18:50:07.807382   26384 logs.go:277] 2 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:50:07.807423   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.810941   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.814428   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:07.814480   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:07.840502   26384 cri.go:87] found id: ""
	I0307 18:50:07.840530   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.840537   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:07.840543   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:07.840590   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:07.872460   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:07.872482   26384 cri.go:87] found id: ""
	I0307 18:50:07.872490   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:07.872532   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.876167   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:07.876234   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:07.902163   26384 cri.go:87] found id: ""
	I0307 18:50:07.902185   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.902194   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:07.902203   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:07.902264   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:07.934206   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:07.934234   26384 cri.go:87] found id: ""
	I0307 18:50:07.934244   26384 logs.go:277] 1 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:07.934302   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.937973   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:07.938062   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:07.969362   26384 cri.go:87] found id: ""
	I0307 18:50:07.969395   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.969406   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:07.969413   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:07.969476   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:07.996288   26384 cri.go:87] found id: ""
	I0307 18:50:07.996313   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.996322   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:07.996332   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:07.996346   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:08.022863   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:08.022893   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:08.072434   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:08.072467   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:08.110215   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:08.110244   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:08.139123   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:50:08.139152   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:50:08.172722   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:08.172748   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 18:50:22.210905   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.038132901s)
	W0307 18:50:22.210954   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:22.210963   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:50:22.210973   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	W0307 18:50:22.243161   26384 logs.go:130] failed etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:50:22.230070    2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
	 output: 
	** stderr ** 
	E0307 18:50:22.230070    2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
	
	** /stderr **
	I0307 18:50:22.243182   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:22.243194   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:22.312610   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:22.312647   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:22.376483   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:22.376512   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:22.441347   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:22.441379   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:24.956249   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:24.956843   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:25.241295   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:25.241366   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:25.271038   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:25.271057   26384 cri.go:87] found id: ""
	I0307 18:50:25.271063   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:25.271112   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.275131   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:25.275189   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:25.304102   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:25.304122   26384 cri.go:87] found id: ""
	I0307 18:50:25.304131   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:25.304176   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.308112   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:25.308165   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:25.335593   26384 cri.go:87] found id: ""
	I0307 18:50:25.335621   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.335631   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:25.335639   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:25.335696   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:25.366744   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:25.366765   26384 cri.go:87] found id: ""
	I0307 18:50:25.366773   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:25.366814   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.370479   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:25.370523   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:25.397628   26384 cri.go:87] found id: ""
	I0307 18:50:25.397651   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.397657   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:25.397662   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:25.397703   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:25.424370   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:25.424388   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:25.424392   26384 cri.go:87] found id: ""
	I0307 18:50:25.424399   26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:25.424438   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.428375   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.432135   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:25.432197   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:25.464666   26384 cri.go:87] found id: ""
	I0307 18:50:25.464686   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.464693   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:25.464698   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:25.464754   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:25.495748   26384 cri.go:87] found id: ""
	I0307 18:50:25.495771   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.495778   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:25.495798   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:25.495816   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:25.552387   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:25.552409   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:25.552419   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:25.585072   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:25.585100   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:25.612624   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:25.612652   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:25.642351   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:25.642375   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:25.696054   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:25.696080   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:25.759230   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:25.759261   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:25.771377   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:25.771400   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:25.814932   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:25.814958   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:25.880431   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:25.880462   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:28.429316   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:28.430023   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:28.740900   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:28.740981   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:28.771490   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:28.771510   26384 cri.go:87] found id: ""
	I0307 18:50:28.771517   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:28.771573   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.775481   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:28.775544   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:28.803618   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:28.803637   26384 cri.go:87] found id: ""
	I0307 18:50:28.803644   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:28.803682   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.807610   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:28.807656   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:28.837030   26384 cri.go:87] found id: ""
	I0307 18:50:28.837048   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.837053   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:28.837058   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:28.837105   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:28.868318   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:28.868344   26384 cri.go:87] found id: ""
	I0307 18:50:28.868353   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:28.868412   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.872041   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:28.872096   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:28.900155   26384 cri.go:87] found id: ""
	I0307 18:50:28.900186   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.900195   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:28.900206   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:28.900266   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:28.928973   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:28.929007   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:28.929014   26384 cri.go:87] found id: ""
	I0307 18:50:28.929022   26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:28.929080   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.932963   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.936674   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:28.936728   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:28.965932   26384 cri.go:87] found id: ""
	I0307 18:50:28.965955   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.965965   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:28.965972   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:28.966027   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:28.996172   26384 cri.go:87] found id: ""
	I0307 18:50:28.996202   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.996213   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:28.996230   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:28.996252   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:29.027476   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:29.027505   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:29.068982   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:29.069007   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:29.123121   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:29.123155   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:29.154965   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:29.154990   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:29.221021   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:29.221051   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:29.275777   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:29.275800   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:29.275817   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:29.305802   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:29.305836   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:29.374935   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:29.374971   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:29.404375   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:29.404401   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:31.916470   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:31.917095   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:32.241577   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:32.241647   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:32.273069   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:32.273102   26384 cri.go:87] found id: ""
	I0307 18:50:32.273108   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:32.273164   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.277800   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:32.277842   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:32.312694   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:32.312722   26384 cri.go:87] found id: ""
	I0307 18:50:32.312732   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:32.312778   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.316764   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:32.316809   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:32.348032   26384 cri.go:87] found id: ""
	I0307 18:50:32.348049   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.348054   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:32.348059   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:32.348116   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:32.382261   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:32.382286   26384 cri.go:87] found id: ""
	I0307 18:50:32.382297   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:32.382355   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.386519   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:32.386583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:32.423869   26384 cri.go:87] found id: ""
	I0307 18:50:32.423890   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.423897   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:32.423902   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:32.423964   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:32.461514   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:32.461538   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:32.461545   26384 cri.go:87] found id: ""
	I0307 18:50:32.461553   26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:32.461606   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.465604   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.469437   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:32.469474   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:32.507355   26384 cri.go:87] found id: ""
	I0307 18:50:32.507376   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.507388   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:32.507395   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:32.507451   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:32.545202   26384 cri.go:87] found id: ""
	I0307 18:50:32.545230   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.545240   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:32.545257   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:32.545270   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:32.598969   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:32.598996   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:32.666940   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:32.666972   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:32.724486   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:32.724506   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:32.724516   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:32.758363   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:32.758389   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:32.838189   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:32.838228   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:32.891708   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:32.891740   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:32.903720   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:32.903746   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:32.936722   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:32.936745   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:32.969027   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:32.969055   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:35.524418   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:35.525031   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:35.741445   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:35.741534   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:35.771644   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:35.771665   26384 cri.go:87] found id: ""
	I0307 18:50:35.771673   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:35.771733   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.775944   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:35.776002   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:35.807438   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:35.807455   26384 cri.go:87] found id: ""
	I0307 18:50:35.807464   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:35.807512   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.811521   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:35.811577   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:35.839719   26384 cri.go:87] found id: ""
	I0307 18:50:35.839739   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.839746   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:35.839751   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:35.839801   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:35.870068   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:35.870089   26384 cri.go:87] found id: ""
	I0307 18:50:35.870096   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:35.870139   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.873953   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:35.874009   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:35.907548   26384 cri.go:87] found id: ""
	I0307 18:50:35.907576   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.907584   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:35.907589   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:35.907648   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:35.938809   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:35.938828   26384 cri.go:87] found id: ""
	I0307 18:50:35.938834   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:35.938888   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.943995   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:35.944045   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:35.971387   26384 cri.go:87] found id: ""
	I0307 18:50:35.971406   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.971413   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:35.971420   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:35.971470   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:35.998911   26384 cri.go:87] found id: ""
	I0307 18:50:35.998938   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.998965   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:35.998982   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:35.999012   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:36.038815   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:36.038848   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:36.077044   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:36.077071   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:36.129558   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:36.129591   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:36.129604   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:36.166935   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:36.166960   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:36.195852   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:36.195882   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:36.271088   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:36.271123   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:36.326628   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:36.326662   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:36.389379   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:36.389411   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:38.901954   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:38.902491   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:39.240923   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:39.241009   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:39.271083   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:39.271107   26384 cri.go:87] found id: ""
	I0307 18:50:39.271116   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:39.271171   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.275511   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:39.275567   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:39.306601   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:39.306618   26384 cri.go:87] found id: ""
	I0307 18:50:39.306625   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:39.306672   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.311169   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:39.311223   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:39.341921   26384 cri.go:87] found id: ""
	I0307 18:50:39.341940   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.341945   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:39.341951   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:39.342005   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:39.370475   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:39.370499   26384 cri.go:87] found id: ""
	I0307 18:50:39.370509   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:39.370560   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.374423   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:39.374480   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:39.404780   26384 cri.go:87] found id: ""
	I0307 18:50:39.404801   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.404809   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:39.404819   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:39.404877   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:39.435660   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:39.435684   26384 cri.go:87] found id: ""
	I0307 18:50:39.435692   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:39.435746   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.439799   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:39.439857   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:39.468225   26384 cri.go:87] found id: ""
	I0307 18:50:39.468250   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.468259   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:39.468267   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:39.468325   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:39.500922   26384 cri.go:87] found id: ""
	I0307 18:50:39.500949   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.500958   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:39.500982   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:39.500995   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:39.530882   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:39.530921   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:39.600657   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:39.600685   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:39.649285   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:39.649317   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:39.697957   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:39.697989   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:39.759513   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:39.759544   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:39.772345   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:39.772373   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:39.831389   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:39.831411   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:39.831421   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:39.864274   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:39.864314   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:42.400891   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:42.401466   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:42.740872   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:42.740939   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:42.768431   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:42.768453   26384 cri.go:87] found id: ""
	I0307 18:50:42.768460   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:42.768513   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.772288   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:42.772331   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:42.798526   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:42.798553   26384 cri.go:87] found id: ""
	I0307 18:50:42.798562   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:42.798603   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.802234   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:42.802282   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:42.828743   26384 cri.go:87] found id: ""
	I0307 18:50:42.828762   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.828769   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:42.828774   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:42.828825   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:42.856471   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:42.856494   26384 cri.go:87] found id: ""
	I0307 18:50:42.856501   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:42.856546   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.860506   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:42.860571   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:42.886392   26384 cri.go:87] found id: ""
	I0307 18:50:42.886416   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.886423   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:42.886428   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:42.886474   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:42.913452   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:42.913478   26384 cri.go:87] found id: ""
	I0307 18:50:42.913487   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:42.913532   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.917323   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:42.917383   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:42.943946   26384 cri.go:87] found id: ""
	I0307 18:50:42.943964   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.943970   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:42.943975   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:42.944025   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:42.969863   26384 cri.go:87] found id: ""
	I0307 18:50:42.969888   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.969896   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:42.969927   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:42.969944   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:43.027701   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:43.027737   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:43.041018   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:43.041051   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:43.090630   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:43.090658   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:43.090670   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:43.162692   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:43.162728   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:43.208000   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:43.208025   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:43.241826   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:43.241853   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:43.272472   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:43.272497   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:43.323281   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:43.323311   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:45.854952   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:45.855553   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:46.241035   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:46.241121   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:46.274554   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:46.274576   26384 cri.go:87] found id: ""
	I0307 18:50:46.274583   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:46.274637   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.278942   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:46.278994   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:46.307295   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:46.307313   26384 cri.go:87] found id: ""
	I0307 18:50:46.307320   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:46.307363   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.311114   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:46.311163   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:46.341762   26384 cri.go:87] found id: ""
	I0307 18:50:46.341780   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.341787   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:46.341792   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:46.341852   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:46.374164   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:46.374187   26384 cri.go:87] found id: ""
	I0307 18:50:46.374196   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:46.374252   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.378131   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:46.378201   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:46.406158   26384 cri.go:87] found id: ""
	I0307 18:50:46.406176   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.406182   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:46.406188   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:46.406230   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:46.434896   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:46.434922   26384 cri.go:87] found id: ""
	I0307 18:50:46.434931   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:46.434985   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.438785   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:46.438842   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:46.469078   26384 cri.go:87] found id: ""
	I0307 18:50:46.469100   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.469107   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:46.469113   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:46.469178   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:46.500068   26384 cri.go:87] found id: ""
	I0307 18:50:46.500096   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.500105   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:46.500117   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:46.500128   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:46.537674   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:46.537702   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:46.599647   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:46.599677   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:46.611626   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:46.611656   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:46.664489   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:46.664513   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:46.664526   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:46.698473   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:46.698501   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:46.730118   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:46.730147   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:46.777380   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:46.777407   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:46.827387   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:46.827416   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:49.400363   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:49.400915   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:49.741647   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:49.741733   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:49.774027   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:49.774056   26384 cri.go:87] found id: ""
	I0307 18:50:49.774065   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:49.774123   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.778228   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:49.778286   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:49.807806   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:49.807832   26384 cri.go:87] found id: ""
	I0307 18:50:49.807841   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:49.807884   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.811537   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:49.811584   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:49.839443   26384 cri.go:87] found id: ""
	I0307 18:50:49.839468   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.839477   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:49.839485   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:49.839543   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:49.868206   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:49.868225   26384 cri.go:87] found id: ""
	I0307 18:50:49.868232   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:49.868273   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.871988   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:49.872029   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:49.903763   26384 cri.go:87] found id: ""
	I0307 18:50:49.903790   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.903802   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:49.903809   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:49.903869   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:49.931386   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:49.931408   26384 cri.go:87] found id: ""
	I0307 18:50:49.931417   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:49.931470   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.935416   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:49.935472   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:49.964413   26384 cri.go:87] found id: ""
	I0307 18:50:49.964442   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.964451   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:49.964457   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:49.964519   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:49.995371   26384 cri.go:87] found id: ""
	I0307 18:50:49.995400   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.995410   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:49.995428   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:49.995443   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:50.027383   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:50.027415   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:50.102948   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:50.102987   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:50.153563   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:50.153595   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:50.187209   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:50.187240   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:50.252908   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:50.252940   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:50.265236   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:50.265260   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:50.319484   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:50.319506   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:50.319518   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:50.349093   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:50.349119   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:52.888932   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:52.889665   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:53.241383   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:53.241454   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:53.270824   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:53.270844   26384 cri.go:87] found id: ""
	I0307 18:50:53.270851   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:53.270903   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.274602   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:53.274642   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:53.307455   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:53.307483   26384 cri.go:87] found id: ""
	I0307 18:50:53.307492   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:53.307545   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.311591   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:53.311651   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:53.339718   26384 cri.go:87] found id: ""
	I0307 18:50:53.339742   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.339751   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:53.339758   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:53.339811   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:53.369697   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:53.369729   26384 cri.go:87] found id: ""
	I0307 18:50:53.369739   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:53.369781   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.373719   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:53.373782   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:53.401736   26384 cri.go:87] found id: ""
	I0307 18:50:53.401754   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.401760   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:53.401764   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:53.401823   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:53.432212   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:53.432236   26384 cri.go:87] found id: ""
	I0307 18:50:53.432244   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:53.432301   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.436390   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:53.436449   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:53.465471   26384 cri.go:87] found id: ""
	I0307 18:50:53.465500   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.465518   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:53.465525   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:53.465583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:53.493404   26384 cri.go:87] found id: ""
	I0307 18:50:53.493431   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.493440   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:53.493455   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:53.493468   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:53.556791   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:53.556823   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:53.568973   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:53.568992   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:53.621325   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:53.621345   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:53.621356   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:53.662717   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:53.662744   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:53.693831   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:53.693855   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:53.731078   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:53.731104   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:53.759392   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:53.759416   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:53.827438   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:53.827472   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:56.380799   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:56.381488   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:56.740948   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:56.741023   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:56.777942   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:56.777966   26384 cri.go:87] found id: ""
	I0307 18:50:56.777977   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:56.778023   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.782180   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:56.782230   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:56.810835   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:56.810861   26384 cri.go:87] found id: ""
	I0307 18:50:56.810870   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:56.810916   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.814853   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:56.814919   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:56.842426   26384 cri.go:87] found id: ""
	I0307 18:50:56.842451   26384 logs.go:277] 0 containers: []
	W0307 18:50:56.842459   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:56.842465   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:56.842517   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:56.877177   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:56.877204   26384 cri.go:87] found id: ""
	I0307 18:50:56.877212   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:56.877269   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.881405   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:56.881477   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:56.913559   26384 cri.go:87] found id: ""
	I0307 18:50:56.913584   26384 logs.go:277] 0 containers: []
	W0307 18:50:56.913594   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:56.913602   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:56.913659   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:56.941955   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:56.941979   26384 cri.go:87] found id: ""
	I0307 18:50:56.941987   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:56.942045   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.946194   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:56.946260   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:56.978326   26384 cri.go:87] found id: ""
	I0307 18:50:56.978349   26384 logs.go:277] 0 containers: []
	W0307 18:50:56.978355   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:56.978361   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:56.978420   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:57.007950   26384 cri.go:87] found id: ""
	I0307 18:50:57.007973   26384 logs.go:277] 0 containers: []
	W0307 18:50:57.007979   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:57.007990   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:57.008004   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:57.079815   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:57.079853   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:57.120095   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:57.120125   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:57.180846   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:57.180881   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:57.193148   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:57.193171   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:57.246199   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:57.246224   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:57.246238   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:57.299491   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:57.299528   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:57.335019   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:57.335052   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:57.363632   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:57.363662   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:59.901204   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:59.901827   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:00.241273   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:00.241359   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:00.271191   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:00.271210   26384 cri.go:87] found id: ""
	I0307 18:51:00.271217   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:51:00.271260   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.276060   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:00.276095   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:00.313616   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:00.313635   26384 cri.go:87] found id: ""
	I0307 18:51:00.313642   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:00.313691   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.317695   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:00.317746   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:00.354185   26384 cri.go:87] found id: ""
	I0307 18:51:00.354202   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.354210   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:00.354217   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:00.354272   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:00.388615   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:00.388637   26384 cri.go:87] found id: ""
	I0307 18:51:00.388646   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:00.388708   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.392706   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:00.392764   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:00.419909   26384 cri.go:87] found id: ""
	I0307 18:51:00.419930   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.419937   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:00.419942   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:00.419989   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:00.448896   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:00.448921   26384 cri.go:87] found id: ""
	I0307 18:51:00.448929   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:00.448982   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.452787   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:00.452848   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:00.482963   26384 cri.go:87] found id: ""
	I0307 18:51:00.482983   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.482989   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:00.482994   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:00.483049   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:00.510864   26384 cri.go:87] found id: ""
	I0307 18:51:00.510894   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.510905   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:00.510922   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:00.510938   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:00.584622   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:00.584656   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:00.620966   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:00.620997   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:00.633989   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:00.634015   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:00.685115   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:00.685136   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:51:00.685145   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:00.722939   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:00.722971   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:00.751368   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:00.751399   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:00.814202   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:00.814234   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:00.855965   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:00.855990   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:03.406623   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:03.407166   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:03.740702   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:03.740777   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:03.774539   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:03.774560   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:03.774567   26384 cri.go:87] found id: ""
	I0307 18:51:03.774575   26384 logs.go:277] 2 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:51:03.774639   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.778696   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.782771   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:03.782817   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:03.818150   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:03.818173   26384 cri.go:87] found id: ""
	I0307 18:51:03.818182   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:03.818226   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.822385   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:03.822442   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:03.855669   26384 cri.go:87] found id: ""
	I0307 18:51:03.855697   26384 logs.go:277] 0 containers: []
	W0307 18:51:03.855706   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:03.855713   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:03.855765   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:03.888270   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:03.888297   26384 cri.go:87] found id: ""
	I0307 18:51:03.888304   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:03.888346   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.892269   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:03.892332   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:03.920187   26384 cri.go:87] found id: ""
	I0307 18:51:03.920221   26384 logs.go:277] 0 containers: []
	W0307 18:51:03.920232   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:03.920239   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:03.920296   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:03.953587   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:03.953613   26384 cri.go:87] found id: ""
	I0307 18:51:03.953620   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:03.953664   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.957799   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:03.957864   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:03.990134   26384 cri.go:87] found id: ""
	I0307 18:51:03.990163   26384 logs.go:277] 0 containers: []
	W0307 18:51:03.990173   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:03.990180   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:03.990252   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:04.027162   26384 cri.go:87] found id: ""
	I0307 18:51:04.027193   26384 logs.go:277] 0 containers: []
	W0307 18:51:04.027203   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:04.027222   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:51:04.027242   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:04.067517   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:04.067549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:04.149401   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:04.149431   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:04.193745   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:04.193773   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:04.255156   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:04.255194   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:04.273611   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:04.273640   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 18:51:25.368122   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.094454524s)
	W0307 18:51:25.368169   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:25.368184   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:25.368198   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:25.400867   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:25.400894   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:25.431796   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:25.431828   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:25.487683   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:25.487715   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:28.026074   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:28.026610   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:28.241444   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:28.241526   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:28.274761   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:28.274787   26384 cri.go:87] found id: ""
	I0307 18:51:28.274794   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:28.274855   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.279831   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:28.279890   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:28.313516   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:28.313534   26384 cri.go:87] found id: ""
	I0307 18:51:28.313546   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:28.313588   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.317666   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:28.317719   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:28.347101   26384 cri.go:87] found id: ""
	I0307 18:51:28.347124   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.347131   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:28.347136   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:28.347198   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:28.378300   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:28.378320   26384 cri.go:87] found id: ""
	I0307 18:51:28.378326   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:28.378377   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.382695   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:28.382753   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:28.410959   26384 cri.go:87] found id: ""
	I0307 18:51:28.410981   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.410988   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:28.410995   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:28.411048   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:28.441806   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:28.441826   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:28.441833   26384 cri.go:87] found id: ""
	I0307 18:51:28.441842   26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:28.441892   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.446211   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.450221   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:28.450282   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:28.483257   26384 cri.go:87] found id: ""
	I0307 18:51:28.483279   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.483286   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:28.483292   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:28.483358   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:28.510972   26384 cri.go:87] found id: ""
	I0307 18:51:28.510998   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.511008   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:28.511026   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:28.511044   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:28.524745   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:28.524776   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:28.578288   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:28.578311   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:28.578323   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:28.611345   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:28.611382   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:28.683142   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:28.683180   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:28.713237   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:28.713266   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:28.751528   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:28.751554   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:28.789824   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:28.789849   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:28.849258   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:28.849288   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:28.881741   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:28.881766   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:31.435018   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:31.435708   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:31.741199   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:31.741275   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:31.775567   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:31.775595   26384 cri.go:87] found id: ""
	I0307 18:51:31.775603   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:31.775660   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.779786   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:31.779843   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:31.811197   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:31.811217   26384 cri.go:87] found id: ""
	I0307 18:51:31.811225   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:31.811279   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.815320   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:31.815380   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:31.844870   26384 cri.go:87] found id: ""
	I0307 18:51:31.844898   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.844907   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:31.844915   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:31.844992   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:31.872742   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:31.872765   26384 cri.go:87] found id: ""
	I0307 18:51:31.872779   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:31.872834   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.876867   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:31.876935   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:31.903271   26384 cri.go:87] found id: ""
	I0307 18:51:31.903299   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.903306   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:31.903311   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:31.903361   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:31.930122   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:31.930143   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:31.930147   26384 cri.go:87] found id: ""
	I0307 18:51:31.930153   26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:31.930194   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.933837   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.937392   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:31.937451   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:31.963795   26384 cri.go:87] found id: ""
	I0307 18:51:31.963818   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.963824   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:31.963830   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:31.963871   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:31.997078   26384 cri.go:87] found id: ""
	I0307 18:51:31.997101   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.997107   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:31.997119   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:31.997133   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:32.085403   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:32.085436   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:32.115532   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:32.115557   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:32.171653   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:32.171688   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:32.204332   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:32.204361   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:32.216172   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:32.216197   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:32.266551   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:32.266575   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:32.266593   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:32.297132   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:32.297159   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:32.344077   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:32.344105   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:32.403948   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:32.403977   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:34.935152   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:34.935872   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:35.241335   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:35.241407   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:35.270388   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:35.270412   26384 cri.go:87] found id: ""
	I0307 18:51:35.270418   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:35.270468   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.275051   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:35.275114   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:35.304925   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:35.304971   26384 cri.go:87] found id: ""
	I0307 18:51:35.304979   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:35.305030   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.308987   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:35.309043   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:35.334992   26384 cri.go:87] found id: ""
	I0307 18:51:35.335015   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.335024   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:35.335031   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:35.335078   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:35.363029   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:35.363054   26384 cri.go:87] found id: ""
	I0307 18:51:35.363062   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:35.363112   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.366976   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:35.367027   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:35.393011   26384 cri.go:87] found id: ""
	I0307 18:51:35.393033   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.393040   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:35.393046   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:35.393089   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:35.418706   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:35.418731   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:35.418738   26384 cri.go:87] found id: ""
	I0307 18:51:35.418746   26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:35.418795   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.422711   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.426344   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:35.426404   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:35.453517   26384 cri.go:87] found id: ""
	I0307 18:51:35.453540   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.453547   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:35.453552   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:35.453600   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:35.480473   26384 cri.go:87] found id: ""
	I0307 18:51:35.480506   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.480535   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:35.480557   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:35.480572   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:35.514397   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:35.514430   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:35.553507   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:35.553543   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:35.594291   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:35.594323   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:35.649916   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:35.649950   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:35.708932   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:35.708962   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:35.720655   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:35.720682   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:35.775147   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:35.775170   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:35.775185   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:35.808353   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:35.808378   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:35.888351   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:35.888387   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:38.421085   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:38.421679   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:38.741179   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:38.741264   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:38.771512   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:38.771541   26384 cri.go:87] found id: ""
	I0307 18:51:38.771552   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:38.771608   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.775448   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:38.775518   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:38.803713   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:38.803738   26384 cri.go:87] found id: ""
	I0307 18:51:38.803746   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:38.803797   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.807432   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:38.807485   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:38.841539   26384 cri.go:87] found id: ""
	I0307 18:51:38.841564   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.841572   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:38.841580   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:38.841700   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:38.873163   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:38.873189   26384 cri.go:87] found id: ""
	I0307 18:51:38.873197   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:38.873244   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.876827   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:38.876887   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:38.904500   26384 cri.go:87] found id: ""
	I0307 18:51:38.904525   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.904535   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:38.904541   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:38.904605   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:38.933684   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:38.933703   26384 cri.go:87] found id: ""
	I0307 18:51:38.933708   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:38.933753   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.937611   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:38.937673   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:38.967298   26384 cri.go:87] found id: ""
	I0307 18:51:38.967317   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.967323   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:38.967329   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:38.967381   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:38.994836   26384 cri.go:87] found id: ""
	I0307 18:51:38.994857   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.994864   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:38.994875   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:38.994885   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:39.013172   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:39.013202   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:39.050550   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:39.050577   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:39.081654   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:39.081686   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:39.122178   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:39.122206   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:39.157534   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:39.157558   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:39.215607   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:39.215638   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:39.270533   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:39.270555   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:39.270565   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:39.351014   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:39.351046   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:41.910810   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:41.911444   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:42.240866   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:42.240934   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:42.270659   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:42.270686   26384 cri.go:87] found id: ""
	I0307 18:51:42.270693   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:42.270744   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.274956   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:42.275009   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:42.302640   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:42.302659   26384 cri.go:87] found id: ""
	I0307 18:51:42.302666   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:42.302708   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.306628   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:42.306683   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:42.333725   26384 cri.go:87] found id: ""
	I0307 18:51:42.333744   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.333750   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:42.333757   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:42.333797   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:42.361433   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:42.361455   26384 cri.go:87] found id: ""
	I0307 18:51:42.361461   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:42.361525   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.365419   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:42.365475   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:42.390359   26384 cri.go:87] found id: ""
	I0307 18:51:42.390386   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.390394   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:42.390400   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:42.390466   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:42.418877   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:42.418900   26384 cri.go:87] found id: ""
	I0307 18:51:42.418909   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:42.418961   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.422852   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:42.422922   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:42.449901   26384 cri.go:87] found id: ""
	I0307 18:51:42.449937   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.449947   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:42.449953   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:42.450013   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:42.478218   26384 cri.go:87] found id: ""
	I0307 18:51:42.478243   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.478251   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:42.478269   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:42.478286   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:42.506655   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:42.506700   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:42.582409   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:42.582444   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:42.615907   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:42.615931   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:42.657529   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:42.657560   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:42.712843   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:42.712871   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:42.745993   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:42.746017   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:42.808149   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:42.808182   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:42.820414   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:42.820435   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:42.873183   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:45.374057   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:45.374585   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:45.741047   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:45.741134   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:45.770908   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:45.770936   26384 cri.go:87] found id: ""
	I0307 18:51:45.770944   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:45.771001   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.775199   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:45.775271   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:45.804540   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:45.804560   26384 cri.go:87] found id: ""
	I0307 18:51:45.804567   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:45.804609   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.808609   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:45.808686   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:45.835602   26384 cri.go:87] found id: ""
	I0307 18:51:45.835627   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.835635   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:45.835643   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:45.835702   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:45.868007   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:45.868029   26384 cri.go:87] found id: ""
	I0307 18:51:45.868038   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:45.868098   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.872229   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:45.872288   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:45.900275   26384 cri.go:87] found id: ""
	I0307 18:51:45.900301   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.900310   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:45.900317   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:45.900380   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:45.928163   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:45.928182   26384 cri.go:87] found id: ""
	I0307 18:51:45.928189   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:45.928248   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.932473   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:45.932532   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:45.961937   26384 cri.go:87] found id: ""
	I0307 18:51:45.961971   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.961982   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:45.961990   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:45.962041   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:45.991124   26384 cri.go:87] found id: ""
	I0307 18:51:45.991158   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.991165   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:45.991178   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:45.991195   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:46.055916   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:46.055947   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:46.069670   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:46.069697   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:46.123987   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:46.124010   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:46.124024   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:46.158206   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:46.158235   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:46.234157   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:46.234188   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:46.277028   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:46.277054   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:46.331295   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:46.331325   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:46.369056   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:46.369081   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:48.902692   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:48.903509   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:49.240949   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:49.241016   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:49.270709   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:49.270735   26384 cri.go:87] found id: ""
	I0307 18:51:49.270744   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:49.270804   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.274731   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:49.274789   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:49.302081   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:49.302100   26384 cri.go:87] found id: ""
	I0307 18:51:49.302108   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:49.302166   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.306174   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:49.306234   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:49.333438   26384 cri.go:87] found id: ""
	I0307 18:51:49.333461   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.333468   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:49.333474   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:49.333527   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:49.365533   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:49.365562   26384 cri.go:87] found id: ""
	I0307 18:51:49.365569   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:49.365610   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.369216   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:49.369276   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:49.398301   26384 cri.go:87] found id: ""
	I0307 18:51:49.398326   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.398334   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:49.398341   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:49.398398   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:49.427703   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:49.427722   26384 cri.go:87] found id: ""
	I0307 18:51:49.427730   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:49.427774   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.431651   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:49.431702   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:49.462642   26384 cri.go:87] found id: ""
	I0307 18:51:49.462667   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.462674   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:49.462679   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:49.462729   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:49.489078   26384 cri.go:87] found id: ""
	I0307 18:51:49.489106   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.489116   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:49.489129   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:49.489140   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:49.518966   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:49.518994   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:49.578313   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:49.578343   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:49.632259   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:49.632280   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:49.632292   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:49.665772   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:49.665797   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:49.745503   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:49.745534   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:49.785793   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:49.785819   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:49.821781   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:49.821843   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:49.888865   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:49.888906   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:52.403328   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:52.403890   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:52.741393   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:52.741477   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:52.770492   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:52.770514   26384 cri.go:87] found id: ""
	I0307 18:51:52.770520   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:52.770575   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.774281   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:52.774334   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:52.804403   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:52.804427   26384 cri.go:87] found id: ""
	I0307 18:51:52.804435   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:52.804480   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.808178   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:52.808226   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:52.836026   26384 cri.go:87] found id: ""
	I0307 18:51:52.836048   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.836055   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:52.836060   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:52.836118   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:52.867795   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:52.867824   26384 cri.go:87] found id: ""
	I0307 18:51:52.867834   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:52.867891   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.871532   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:52.871602   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:52.899536   26384 cri.go:87] found id: ""
	I0307 18:51:52.899558   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.899565   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:52.899570   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:52.899631   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:52.927081   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:52.927105   26384 cri.go:87] found id: ""
	I0307 18:51:52.927114   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:52.927170   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.930990   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:52.931056   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:52.961939   26384 cri.go:87] found id: ""
	I0307 18:51:52.961965   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.961973   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:52.961978   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:52.962025   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:52.990556   26384 cri.go:87] found id: ""
	I0307 18:51:52.990582   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.990589   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:52.990602   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:52.990611   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:53.055863   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:53.055899   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:53.118674   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:53.118699   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:53.118712   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:53.160200   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:53.160226   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:53.193132   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:53.193157   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:53.206488   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:53.206521   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:53.239547   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:53.239575   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:53.271150   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:53.271179   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:53.355907   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:53.355937   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:55.915778   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:55.916343   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:56.240741   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:56.240815   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:56.276584   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:56.276609   26384 cri.go:87] found id: ""
	I0307 18:51:56.276616   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:56.276662   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.280478   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:56.280543   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:56.310551   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:56.310580   26384 cri.go:87] found id: ""
	I0307 18:51:56.310591   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:56.310652   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.314325   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:56.314380   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:56.345523   26384 cri.go:87] found id: ""
	I0307 18:51:56.345545   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.345555   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:56.345562   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:56.345613   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:56.374295   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:56.374316   26384 cri.go:87] found id: ""
	I0307 18:51:56.374325   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:56.374369   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.377845   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:56.377893   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:56.407290   26384 cri.go:87] found id: ""
	I0307 18:51:56.407314   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.407323   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:56.407330   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:56.407387   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:56.434800   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:56.434822   26384 cri.go:87] found id: ""
	I0307 18:51:56.434831   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:56.434889   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.438706   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:56.438771   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:56.469291   26384 cri.go:87] found id: ""
	I0307 18:51:56.469321   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.469331   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:56.469338   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:56.469400   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:56.496682   26384 cri.go:87] found id: ""
	I0307 18:51:56.496707   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.496716   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:56.496731   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:56.496749   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:56.558292   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:56.558324   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:56.616546   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:56.616566   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:56.616576   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:56.645444   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:56.645482   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:56.690522   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:56.690549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:56.729452   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:56.729480   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:56.741227   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:56.741250   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:56.774040   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:56.774069   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:56.851946   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:56.851980   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:59.410226   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:59.410809   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:59.741513   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:59.741583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:59.770692   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:59.770715   26384 cri.go:87] found id: ""
	I0307 18:51:59.770723   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:59.770773   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.774597   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:59.774652   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:59.802266   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:59.802286   26384 cri.go:87] found id: ""
	I0307 18:51:59.802293   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:59.802330   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.805853   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:59.805892   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:59.833448   26384 cri.go:87] found id: ""
	I0307 18:51:59.833466   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.833473   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:59.833477   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:59.833517   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:59.864701   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:59.864723   26384 cri.go:87] found id: ""
	I0307 18:51:59.864732   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:59.864787   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.868622   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:59.868687   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:59.900470   26384 cri.go:87] found id: ""
	I0307 18:51:59.900500   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.900510   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:59.900518   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:59.900573   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:59.927551   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:59.927580   26384 cri.go:87] found id: ""
	I0307 18:51:59.927588   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:59.927633   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.931339   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:59.931393   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:59.959403   26384 cri.go:87] found id: ""
	I0307 18:51:59.959426   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.959436   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:59.959442   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:59.959484   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:59.987595   26384 cri.go:87] found id: ""
	I0307 18:51:59.987616   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.987623   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:59.987637   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:59.987654   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:00.035743   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:00.035772   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:00.099440   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:00.099473   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:00.131520   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:00.131549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:00.208993   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:00.209030   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:00.267588   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:00.267622   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:00.301447   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:00.301476   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:00.313284   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:00.313307   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:00.368862   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:00.368881   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:00.368892   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:02.901502   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:02.902198   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:03.240812   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:03.240884   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:03.271596   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:03.271623   26384 cri.go:87] found id: ""
	I0307 18:52:03.271632   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:03.271693   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.276075   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:03.276140   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:03.306294   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:03.306321   26384 cri.go:87] found id: ""
	I0307 18:52:03.306329   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:03.306372   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.310127   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:03.310195   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:03.346928   26384 cri.go:87] found id: ""
	I0307 18:52:03.346956   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.346964   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:03.346970   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:03.347028   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:03.373901   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:03.373935   26384 cri.go:87] found id: ""
	I0307 18:52:03.373944   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:03.374004   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.377726   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:03.377816   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:03.408820   26384 cri.go:87] found id: ""
	I0307 18:52:03.408855   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.408862   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:03.408880   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:03.408938   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:03.437027   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:03.437049   26384 cri.go:87] found id: ""
	I0307 18:52:03.437060   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:03.437104   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.440989   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:03.441047   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:03.470590   26384 cri.go:87] found id: ""
	I0307 18:52:03.470614   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.470621   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:03.470627   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:03.470688   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:03.500217   26384 cri.go:87] found id: ""
	I0307 18:52:03.500244   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.500252   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:03.500267   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:03.500280   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:03.566239   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:03.566268   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:03.625165   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:03.625184   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:03.625195   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:03.682195   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:03.682226   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:03.719700   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:03.719727   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:03.731216   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:03.731240   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:03.763196   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:03.763229   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:03.791661   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:03.791686   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:03.868166   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:03.868202   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:06.409727   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:06.410322   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:06.740737   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:06.740806   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:06.771108   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:06.771137   26384 cri.go:87] found id: ""
	I0307 18:52:06.771144   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:06.771189   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.775193   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:06.775250   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:06.806716   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:06.806737   26384 cri.go:87] found id: ""
	I0307 18:52:06.806746   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:06.806795   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.810459   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:06.810504   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:06.837774   26384 cri.go:87] found id: ""
	I0307 18:52:06.837797   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.837804   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:06.837809   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:06.837860   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:06.866218   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:06.866239   26384 cri.go:87] found id: ""
	I0307 18:52:06.866249   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:06.866303   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.869982   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:06.870039   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:06.899518   26384 cri.go:87] found id: ""
	I0307 18:52:06.899546   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.899556   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:06.899562   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:06.899617   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:06.927743   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:06.927770   26384 cri.go:87] found id: ""
	I0307 18:52:06.927778   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:06.927820   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.931549   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:06.931613   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:06.961419   26384 cri.go:87] found id: ""
	I0307 18:52:06.961445   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.961452   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:06.961457   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:06.961518   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:06.989502   26384 cri.go:87] found id: ""
	I0307 18:52:06.989526   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.989532   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:06.989546   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:06.989559   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:07.025827   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:07.025850   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:07.086485   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:07.086512   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:07.098772   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:07.098799   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:07.130198   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:07.130225   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:07.212261   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:07.212293   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:07.268115   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:07.268148   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:07.330511   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:07.330537   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:07.330549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:07.362299   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:07.362331   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:09.904436   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:09.905035   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:10.241493   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:10.241591   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:10.270226   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:10.270250   26384 cri.go:87] found id: ""
	I0307 18:52:10.270259   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:10.270316   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.274003   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:10.274065   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:10.301912   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:10.301935   26384 cri.go:87] found id: ""
	I0307 18:52:10.301943   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:10.301995   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.305750   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:10.305809   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:10.333329   26384 cri.go:87] found id: ""
	I0307 18:52:10.333347   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.333356   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:10.333364   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:10.333415   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:10.365807   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:10.365830   26384 cri.go:87] found id: ""
	I0307 18:52:10.365837   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:10.365876   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.369503   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:10.369555   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:10.402354   26384 cri.go:87] found id: ""
	I0307 18:52:10.402382   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.402391   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:10.402398   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:10.402458   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:10.431242   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:10.431268   26384 cri.go:87] found id: ""
	I0307 18:52:10.431278   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:10.431331   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.435085   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:10.435150   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:10.462020   26384 cri.go:87] found id: ""
	I0307 18:52:10.462044   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.462053   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:10.462059   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:10.462117   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:10.492729   26384 cri.go:87] found id: ""
	I0307 18:52:10.492755   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.492761   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:10.492776   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:10.492788   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:10.550753   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:10.550787   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:10.587328   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:10.587353   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:10.649658   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:10.649690   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:10.688111   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:10.688141   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:10.715243   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:10.715271   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:10.794097   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:10.794129   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:10.806313   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:10.806337   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:10.859925   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:10.859948   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:10.859957   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:13.412753   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:13.413326   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:13.740752   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:13.740822   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:13.769106   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:13.769130   26384 cri.go:87] found id: ""
	I0307 18:52:13.769139   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:13.769197   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.772932   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:13.772977   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:13.799190   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:13.799214   26384 cri.go:87] found id: ""
	I0307 18:52:13.799224   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:13.799272   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.803163   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:13.803229   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:13.829114   26384 cri.go:87] found id: ""
	I0307 18:52:13.829137   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.829143   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:13.829148   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:13.829215   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:13.860207   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:13.860232   26384 cri.go:87] found id: ""
	I0307 18:52:13.860241   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:13.860299   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.864306   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:13.864365   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:13.895421   26384 cri.go:87] found id: ""
	I0307 18:52:13.895447   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.895456   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:13.895464   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:13.895523   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:13.926222   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:13.926245   26384 cri.go:87] found id: ""
	I0307 18:52:13.926252   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:13.926301   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.930178   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:13.930235   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:13.954048   26384 cri.go:87] found id: ""
	I0307 18:52:13.954067   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.954073   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:13.954081   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:13.954137   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:13.982093   26384 cri.go:87] found id: ""
	I0307 18:52:13.982112   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.982118   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:13.982130   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:13.982143   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:14.038975   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:14.038990   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:14.039000   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:14.090619   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:14.090645   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:14.148386   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:14.148418   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:14.209750   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:14.209782   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:14.222299   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:14.222320   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:14.259738   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:14.259764   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:14.288148   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:14.288183   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:14.364866   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:14.364898   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
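	The `api_server.go:252/268` probes repeated throughout the cycle above amount to an HTTPS GET against the apiserver's `/healthz` that fails with "connection refused" while the apiserver is down. A minimal sketch of the same check with curl — the address here is a deliberate placeholder port with nothing listening, not the node's real `192.168.39.212:8443` endpoint:

```shell
# Probe a kube-apiserver /healthz endpoint the way the log's healthz checks do.
# Placeholder endpoint: nothing listens on 127.0.0.1:9, so the probe is refused,
# mirroring the "stopped: ... connect: connection refused" lines in the log.
probe_healthz() {
  # -k: the apiserver serves a self-signed cert; -s: quiet; -f: fail on HTTP errors;
  # --max-time bounds each attempt the way the poller's per-try timeout does.
  if curl -ksf --max-time 2 "https://$1/healthz" >/dev/null 2>&1; then
    echo "healthy: $1"
  else
    echo "stopped: $1 refused or unhealthy"
  fi
}

probe_healthz 127.0.0.1:9
```

	In the real poller this probe runs on an interval until it either succeeds or a deadline expires; each failed iteration is what triggers the container-listing and log-gathering pass seen above.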
	I0307 18:52:16.896622   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:16.897179   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:17.241681   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:17.241765   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:17.270963   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:17.270985   26384 cri.go:87] found id: ""
	I0307 18:52:17.270994   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:17.271055   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.274819   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:17.274879   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:17.303431   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:17.303455   26384 cri.go:87] found id: ""
	I0307 18:52:17.303464   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:17.303516   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.307271   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:17.307316   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:17.336969   26384 cri.go:87] found id: ""
	I0307 18:52:17.336994   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.337002   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:17.337009   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:17.337061   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:17.364451   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:17.364476   26384 cri.go:87] found id: ""
	I0307 18:52:17.364484   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:17.364543   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.368076   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:17.368130   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:17.395637   26384 cri.go:87] found id: ""
	I0307 18:52:17.395660   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.395667   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:17.395672   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:17.395715   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:17.423253   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:17.423273   26384 cri.go:87] found id: ""
	I0307 18:52:17.423279   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:17.423321   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.427005   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:17.427060   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:17.454713   26384 cri.go:87] found id: ""
	I0307 18:52:17.454731   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.454736   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:17.454742   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:17.454784   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:17.486176   26384 cri.go:87] found id: ""
	I0307 18:52:17.486199   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.486206   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:17.486219   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:17.486229   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:17.498032   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:17.498055   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:17.557073   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:17.557097   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:17.557110   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:17.594388   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:17.594418   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:17.620305   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:17.620338   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:17.702872   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:17.702904   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:17.759889   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:17.759926   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:17.817947   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:17.817980   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:17.865944   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:17.865973   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:20.398731   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:20.399378   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:20.740808   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:20.740889   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:20.774030   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:20.774056   26384 cri.go:87] found id: ""
	I0307 18:52:20.774066   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:20.774117   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.778074   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:20.778136   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:20.806773   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:20.806791   26384 cri.go:87] found id: ""
	I0307 18:52:20.806798   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:20.806846   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.810652   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:20.810700   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:20.838994   26384 cri.go:87] found id: ""
	I0307 18:52:20.839019   26384 logs.go:277] 0 containers: []
	W0307 18:52:20.839029   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:20.839042   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:20.839102   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:20.869727   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:20.869748   26384 cri.go:87] found id: ""
	I0307 18:52:20.869756   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:20.869812   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.873736   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:20.873793   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:20.901823   26384 cri.go:87] found id: ""
	I0307 18:52:20.901844   26384 logs.go:277] 0 containers: []
	W0307 18:52:20.901851   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:20.901857   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:20.901929   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:20.934273   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:20.934298   26384 cri.go:87] found id: ""
	I0307 18:52:20.934306   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:20.934356   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.938406   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:20.938472   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:20.969450   26384 cri.go:87] found id: ""
	I0307 18:52:20.969479   26384 logs.go:277] 0 containers: []
	W0307 18:52:20.969486   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:20.969492   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:20.969541   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:21.001492   26384 cri.go:87] found id: ""
	I0307 18:52:21.001514   26384 logs.go:277] 0 containers: []
	W0307 18:52:21.001521   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:21.001534   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:21.001548   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:21.054970   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:21.054986   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:21.054995   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:21.088359   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:21.088383   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:21.120677   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:21.120706   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:21.182999   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:21.183047   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:21.245976   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:21.246016   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:21.346906   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:21.346937   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:21.395390   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:21.395425   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:21.428290   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:21.428320   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:23.941739   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:23.942328   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:24.240694   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:24.240774   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:24.270200   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:24.270223   26384 cri.go:87] found id: ""
	I0307 18:52:24.270230   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:24.270277   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.274395   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:24.274459   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:24.305875   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:24.305898   26384 cri.go:87] found id: ""
	I0307 18:52:24.305919   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:24.305974   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.309735   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:24.309791   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:24.336466   26384 cri.go:87] found id: ""
	I0307 18:52:24.336484   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.336493   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:24.336499   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:24.336550   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:24.364312   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:24.364337   26384 cri.go:87] found id: ""
	I0307 18:52:24.364347   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:24.364398   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.368537   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:24.368610   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:24.399307   26384 cri.go:87] found id: ""
	I0307 18:52:24.399333   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.399343   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:24.399350   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:24.399410   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:24.428137   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:24.428157   26384 cri.go:87] found id: ""
	I0307 18:52:24.428165   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:24.428220   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.432114   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:24.432177   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:24.458423   26384 cri.go:87] found id: ""
	I0307 18:52:24.458443   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.458452   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:24.458458   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:24.458507   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:24.486856   26384 cri.go:87] found id: ""
	I0307 18:52:24.486881   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.486889   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:24.486907   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:24.486920   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:24.568604   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:24.568635   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:24.609771   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:24.609802   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:24.665713   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:24.665734   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:24.665752   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:24.691910   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:24.691937   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:24.723832   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:24.723860   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:24.764806   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:24.764833   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:24.821496   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:24.821529   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:24.880200   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:24.880230   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:27.393632   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:27.394219   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:27.741710   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:27.741782   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:27.770323   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:27.770343   26384 cri.go:87] found id: ""
	I0307 18:52:27.770349   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:27.770405   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.774285   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:27.774345   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:27.800912   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:27.800933   26384 cri.go:87] found id: ""
	I0307 18:52:27.800942   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:27.800991   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.804444   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:27.804490   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:27.836265   26384 cri.go:87] found id: ""
	I0307 18:52:27.836290   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.836297   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:27.836303   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:27.836359   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:27.865231   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:27.865260   26384 cri.go:87] found id: ""
	I0307 18:52:27.865269   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:27.865317   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.869523   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:27.869586   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:27.900740   26384 cri.go:87] found id: ""
	I0307 18:52:27.900770   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.900780   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:27.900787   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:27.900849   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:27.929343   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:27.929371   26384 cri.go:87] found id: ""
	I0307 18:52:27.929381   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:27.929440   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.933280   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:27.933348   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:27.966078   26384 cri.go:87] found id: ""
	I0307 18:52:27.966104   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.966111   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:27.966119   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:27.966175   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:27.994539   26384 cri.go:87] found id: ""
	I0307 18:52:27.994562   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.994568   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:27.994581   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:27.994591   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:28.026948   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:28.026989   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:28.039179   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:28.039208   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:28.094604   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:28.094626   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:28.094637   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:28.134457   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:28.134490   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:28.190768   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:28.192394   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:28.251450   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:28.251489   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:28.285082   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:28.285108   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:28.316724   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:28.316750   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:30.901642   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:30.902211   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:31.241667   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:31.241736   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:31.271253   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:31.271279   26384 cri.go:87] found id: ""
	I0307 18:52:31.271288   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:31.271343   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.275766   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:31.275822   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:31.304092   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:31.304115   26384 cri.go:87] found id: ""
	I0307 18:52:31.304121   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:31.304161   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.307829   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:31.307887   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:31.336157   26384 cri.go:87] found id: ""
	I0307 18:52:31.336184   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.336193   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:31.336201   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:31.336266   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:31.362407   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:31.362427   26384 cri.go:87] found id: ""
	I0307 18:52:31.362433   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:31.362484   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.366267   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:31.366323   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:31.392005   26384 cri.go:87] found id: ""
	I0307 18:52:31.392031   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.392040   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:31.392047   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:31.392107   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:31.417145   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:31.417164   26384 cri.go:87] found id: ""
	I0307 18:52:31.417170   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:31.417226   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.421051   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:31.421093   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:31.452946   26384 cri.go:87] found id: ""
	I0307 18:52:31.452966   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.452973   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:31.452991   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:31.453072   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:31.482025   26384 cri.go:87] found id: ""
	I0307 18:52:31.482048   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.482058   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:31.482075   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:31.482094   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:31.535162   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:31.535180   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:31.535190   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:31.575114   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:31.575149   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:31.630597   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:31.630629   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:31.689816   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:31.689854   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:31.703439   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:31.703465   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:31.733755   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:31.733789   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:31.761485   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:31.761517   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:31.849205   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:31.849238   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:34.397092   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:34.399029   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:34.740924   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:34.741012   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:34.768741   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:34.768769   26384 cri.go:87] found id: ""
	I0307 18:52:34.768776   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:34.768826   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.772560   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:34.772608   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:34.801197   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:34.801219   26384 cri.go:87] found id: ""
	I0307 18:52:34.801226   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:34.801268   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.805070   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:34.805123   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:34.841217   26384 cri.go:87] found id: ""
	I0307 18:52:34.841245   26384 logs.go:277] 0 containers: []
	W0307 18:52:34.841258   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:34.841267   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:34.841329   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:34.878585   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:34.878643   26384 cri.go:87] found id: ""
	I0307 18:52:34.878663   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:34.878720   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.882566   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:34.882625   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:34.909524   26384 cri.go:87] found id: ""
	I0307 18:52:34.909550   26384 logs.go:277] 0 containers: []
	W0307 18:52:34.909557   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:34.909565   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:34.909613   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:34.936954   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:34.936975   26384 cri.go:87] found id: ""
	I0307 18:52:34.936983   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:34.937053   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.941502   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:34.941564   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:34.971973   26384 cri.go:87] found id: ""
	I0307 18:52:34.971995   26384 logs.go:277] 0 containers: []
	W0307 18:52:34.972004   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:34.972011   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:34.972070   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:35.003175   26384 cri.go:87] found id: ""
	I0307 18:52:35.003199   26384 logs.go:277] 0 containers: []
	W0307 18:52:35.003206   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:35.003221   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:35.003233   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:35.057263   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:35.057287   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:35.057300   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:35.093840   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:35.093865   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:35.131551   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:35.131580   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:35.213034   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:35.213066   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:35.250410   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:35.250442   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:35.305928   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:35.305959   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:35.366041   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:35.366074   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:35.411044   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:35.411068   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:37.924460   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:37.925115   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:38.240997   26384 kubeadm.go:637] restartCluster took 4m28.730822487s
	W0307 18:52:38.241143   26384 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	I0307 18:52:38.241176   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0307 18:52:39.540779   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.299584283s)
	I0307 18:52:39.540844   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:52:39.554353   26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:52:39.563539   26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:52:39.572536   26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:52:39.572574   26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 18:52:39.609552   26384 kubeadm.go:322] W0307 18:52:39.601196    5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0307 18:52:39.746961   26384 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:56:41.125984   26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0307 18:56:41.126127   26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0307 18:56:41.127655   26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
	I0307 18:56:41.127696   26384 kubeadm.go:322] [preflight] Running pre-flight checks
	I0307 18:56:41.127765   26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:56:41.127875   26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:56:41.127983   26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 18:56:41.128061   26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:56:41.130326   26384 out.go:204]   - Generating certificates and keys ...
	I0307 18:56:41.130393   26384 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0307 18:56:41.130451   26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0307 18:56:41.130531   26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 18:56:41.130620   26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0307 18:56:41.130718   26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 18:56:41.130787   26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0307 18:56:41.130866   26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0307 18:56:41.130953   26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0307 18:56:41.131049   26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 18:56:41.131155   26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 18:56:41.131217   26384 kubeadm.go:322] [certs] Using the existing "sa" key
	I0307 18:56:41.131292   26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:56:41.131363   26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:56:41.131434   26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:56:41.131523   26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:56:41.131603   26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:56:41.131688   26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:56:41.131762   26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:56:41.131795   26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0307 18:56:41.131852   26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:56:41.133514   26384 out.go:204]   - Booting up control plane ...
	I0307 18:56:41.133618   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:56:41.133699   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:56:41.133776   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:56:41.133863   26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:56:41.134051   26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:56:41.134110   26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0307 18:56:41.134119   26384 kubeadm.go:322] 
	I0307 18:56:41.134162   26384 kubeadm.go:322] Unfortunately, an error has occurred:
	I0307 18:56:41.134218   26384 kubeadm.go:322] 	timed out waiting for the condition
	I0307 18:56:41.134224   26384 kubeadm.go:322] 
	I0307 18:56:41.134270   26384 kubeadm.go:322] This error is likely caused by:
	I0307 18:56:41.134347   26384 kubeadm.go:322] 	- The kubelet is not running
	I0307 18:56:41.134504   26384 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0307 18:56:41.134517   26384 kubeadm.go:322] 
	I0307 18:56:41.134650   26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0307 18:56:41.134698   26384 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0307 18:56:41.134741   26384 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0307 18:56:41.134760   26384 kubeadm.go:322] 
	I0307 18:56:41.134863   26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0307 18:56:41.134935   26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0307 18:56:41.135037   26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0307 18:56:41.135174   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0307 18:56:41.135274   26384 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0307 18:56:41.135447   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	W0307 18:56:41.135604   26384 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:52:39.601196    5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:52:39.601196    5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0307 18:56:41.135655   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0307 18:56:42.416834   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.281155319s)
	I0307 18:56:42.416897   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:56:42.431050   26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:56:42.440667   26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:56:42.440700   26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 18:56:42.477411   26384 kubeadm.go:322] W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0307 18:56:42.627046   26384 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 19:00:43.649484   26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0307 19:00:43.649599   26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0307 19:00:43.651218   26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
	I0307 19:00:43.651271   26384 kubeadm.go:322] [preflight] Running pre-flight checks
	I0307 19:00:43.651420   26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 19:00:43.651548   26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 19:00:43.651725   26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 19:00:43.651796   26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 19:00:43.654219   26384 out.go:204]   - Generating certificates and keys ...
	I0307 19:00:43.654288   26384 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0307 19:00:43.654338   26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0307 19:00:43.654403   26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 19:00:43.654458   26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0307 19:00:43.654514   26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 19:00:43.654563   26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0307 19:00:43.654618   26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0307 19:00:43.654668   26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0307 19:00:43.654730   26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 19:00:43.654798   26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 19:00:43.654859   26384 kubeadm.go:322] [certs] Using the existing "sa" key
	I0307 19:00:43.654935   26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 19:00:43.654978   26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 19:00:43.655070   26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 19:00:43.655168   26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 19:00:43.655220   26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 19:00:43.655347   26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 19:00:43.655430   26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 19:00:43.655465   26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0307 19:00:43.655523   26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 19:00:43.657162   26384 out.go:204]   - Booting up control plane ...
	I0307 19:00:43.657245   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 19:00:43.657351   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 19:00:43.657442   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 19:00:43.657533   26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 19:00:43.657658   26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 19:00:43.657699   26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0307 19:00:43.657705   26384 kubeadm.go:322] 
	I0307 19:00:43.657736   26384 kubeadm.go:322] Unfortunately, an error has occurred:
	I0307 19:00:43.657782   26384 kubeadm.go:322] 	timed out waiting for the condition
	I0307 19:00:43.657789   26384 kubeadm.go:322] 
	I0307 19:00:43.657829   26384 kubeadm.go:322] This error is likely caused by:
	I0307 19:00:43.657862   26384 kubeadm.go:322] 	- The kubelet is not running
	I0307 19:00:43.657966   26384 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0307 19:00:43.657977   26384 kubeadm.go:322] 
	I0307 19:00:43.658062   26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0307 19:00:43.658091   26384 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0307 19:00:43.658134   26384 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0307 19:00:43.658142   26384 kubeadm.go:322] 
	I0307 19:00:43.658255   26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0307 19:00:43.658393   26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0307 19:00:43.658480   26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0307 19:00:43.658603   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0307 19:00:43.658702   26384 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0307 19:00:43.658828   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0307 19:00:43.658871   26384 kubeadm.go:403] StartCluster complete in 12m34.187466467s
	I0307 19:00:43.658927   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 19:00:43.658974   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 19:00:43.701064   26384 cri.go:87] found id: "4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
	I0307 19:00:43.701086   26384 cri.go:87] found id: ""
	I0307 19:00:43.701098   26384 logs.go:277] 1 containers: [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc]
	I0307 19:00:43.701142   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.705362   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 19:00:43.705417   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 19:00:43.734452   26384 cri.go:87] found id: "c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
	I0307 19:00:43.734469   26384 cri.go:87] found id: ""
	I0307 19:00:43.734476   26384 logs.go:277] 1 containers: [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56]
	I0307 19:00:43.734531   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.739954   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 19:00:43.740015   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 19:00:43.766381   26384 cri.go:87] found id: ""
	I0307 19:00:43.766402   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.766408   26384 logs.go:279] No container was found matching "coredns"
	I0307 19:00:43.766413   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 19:00:43.766453   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 19:00:43.796840   26384 cri.go:87] found id: "1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
	I0307 19:00:43.796867   26384 cri.go:87] found id: ""
	I0307 19:00:43.796875   26384 logs.go:277] 1 containers: [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41]
	I0307 19:00:43.796929   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.801100   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 19:00:43.801154   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 19:00:43.830552   26384 cri.go:87] found id: ""
	I0307 19:00:43.830577   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.830584   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 19:00:43.830589   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 19:00:43.830637   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 19:00:43.867303   26384 cri.go:87] found id: "8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
	I0307 19:00:43.867324   26384 cri.go:87] found id: ""
	I0307 19:00:43.867331   26384 logs.go:277] 1 containers: [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06]
	I0307 19:00:43.867370   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.871114   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 19:00:43.871164   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 19:00:43.904677   26384 cri.go:87] found id: ""
	I0307 19:00:43.904703   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.904709   26384 logs.go:279] No container was found matching "kindnet"
	I0307 19:00:43.904715   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 19:00:43.904758   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 19:00:43.944324   26384 cri.go:87] found id: ""
	I0307 19:00:43.944349   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.944359   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 19:00:43.944378   26384 logs.go:123] Gathering logs for containerd ...
	I0307 19:00:43.944395   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 19:00:44.011972   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 19:00:44.012003   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:00:44.077224   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 19:00:44.077258   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:00:44.091281   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:00:44.091305   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 19:00:44.158036   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 19:00:44.158054   26384 logs.go:123] Gathering logs for etcd [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56] ...
	I0307 19:00:44.158065   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
	I0307 19:00:44.193518   26384 logs.go:123] Gathering logs for kube-scheduler [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41] ...
	I0307 19:00:44.193546   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
	I0307 19:00:44.281107   26384 logs.go:123] Gathering logs for kube-apiserver [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc] ...
	I0307 19:00:44.281138   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
	I0307 19:00:44.321328   26384 logs.go:123] Gathering logs for kube-controller-manager [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06] ...
	I0307 19:00:44.321353   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
	I0307 19:00:44.370028   26384 logs.go:123] Gathering logs for container status ...
	I0307 19:00:44.370058   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0307 19:00:44.410088   26384 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0307 19:00:44.410135   26384 out.go:239] * 
	W0307 19:00:44.410302   26384 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0307 19:00:44.410323   26384 out.go:239] * 
	W0307 19:00:44.411225   26384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:00:44.414682   26384 out.go:177] 
	W0307 19:00:44.416349   26384 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0307 19:00:44.416447   26384 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0307 19:00:44.416516   26384 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0307 19:00:44.419274   26384 out.go:177] 

** /stderr **
preload_test.go:73: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd failed: exit status 109
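The kubeadm output quoted above recommends inspecting containers with `crictl` against the containerd socket. As a side note, that invocation can be wrapped in a small helper so the same troubleshooting step works against any CRI endpoint; this is an illustrative sketch (the `list_kube_containers_cmd` helper and `CRI_SOCK` variable are not part of minikube or this test suite):

```shell
#!/bin/sh
# The containerd endpoint used by this test run; other runtimes expose
# different sockets (e.g. unix:///var/run/cri-dockerd.sock).
CRI_SOCK="unix:///run/containerd/containerd.sock"

# Compose the crictl command line kubeadm suggests, parameterized over
# the runtime endpoint. It only builds the string; it does not run crictl,
# so it is safe on hosts without a CRI runtime installed.
list_kube_containers_cmd() {
  printf 'crictl --runtime-endpoint %s ps -a' "$1"
}

# On a live node you would pipe this through grep as kubeadm shows:
#   $(list_kube_containers_cmd "$CRI_SOCK") | grep kube | grep -v pause
echo "$(list_kube_containers_cmd "$CRI_SOCK")"
```

On the failing VM itself, the same socket would also be passed to `crictl logs CONTAINERID` once a crashed control-plane container is identified.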
panic.go:522: *** TestPreload FAILED at 2023-03-07 19:00:44.71260395 +0000 UTC m=+3530.150347056
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-203208 -n test-preload-203208
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-203208 -n test-preload-203208: exit status 2 (226.296039ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-203208 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-373242 cp multinode-373242-m03:/home/docker/cp-test.txt                       | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
	|         | multinode-373242:/home/docker/cp-test_multinode-373242-m03_multinode-373242.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-373242 ssh -n                                                                 | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
	|         | multinode-373242-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-373242 ssh -n multinode-373242 sudo cat                                       | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
	|         | /home/docker/cp-test_multinode-373242-m03_multinode-373242.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-373242 cp multinode-373242-m03:/home/docker/cp-test.txt                       | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
	|         | multinode-373242-m02:/home/docker/cp-test_multinode-373242-m03_multinode-373242-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-373242 ssh -n                                                                 | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
	|         | multinode-373242-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-373242 ssh -n multinode-373242-m02 sudo cat                                   | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
	|         | /home/docker/cp-test_multinode-373242-m03_multinode-373242-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-373242 node stop m03                                                          | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
	| node    | multinode-373242 node start                                                             | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:26 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-373242                                                                | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:26 UTC |                     |
	| stop    | -p multinode-373242                                                                     | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:26 UTC | 07 Mar 23 18:29 UTC |
	| start   | -p multinode-373242                                                                     | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:29 UTC | 07 Mar 23 18:35 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-373242                                                                | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:35 UTC |                     |
	| node    | multinode-373242 node delete                                                            | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:35 UTC | 07 Mar 23 18:35 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-373242 stop                                                                   | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:35 UTC | 07 Mar 23 18:38 UTC |
	| start   | -p multinode-373242                                                                     | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:38 UTC | 07 Mar 23 18:42 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-373242                                                                | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC |                     |
	| start   | -p multinode-373242-m02                                                                 | multinode-373242-m02 | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-373242-m03                                                                 | multinode-373242-m03 | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC | 07 Mar 23 18:43 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-373242                                                                 | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC |                     |
	| delete  | -p multinode-373242-m03                                                                 | multinode-373242-m03 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:43 UTC |
	| delete  | -p multinode-373242                                                                     | multinode-373242     | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:43 UTC |
	| start   | -p test-preload-203208                                                                  | test-preload-203208  | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-203208                                                                  | test-preload-203208  | jenkins | v1.29.0 | 07 Mar 23 18:45 UTC | 07 Mar 23 18:45 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-203208                                                                  | test-preload-203208  | jenkins | v1.29.0 | 07 Mar 23 18:45 UTC | 07 Mar 23 18:47 UTC |
	| start   | -p test-preload-203208                                                                  | test-preload-203208  | jenkins | v1.29.0 | 07 Mar 23 18:47 UTC |                     |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 18:47:08
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:47:08.188999   26384 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:47:08.189163   26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:47:08.189221   26384 out.go:309] Setting ErrFile to fd 2...
	I0307 18:47:08.189235   26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:47:08.189633   26384 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	I0307 18:47:08.190229   26384 out.go:303] Setting JSON to false
	I0307 18:47:08.191033   26384 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5376,"bootTime":1678209452,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:47:08.191096   26384 start.go:135] virtualization: kvm guest
	I0307 18:47:08.193540   26384 out.go:177] * [test-preload-203208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:47:08.195219   26384 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:47:08.196770   26384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:47:08.195178   26384 notify.go:220] Checking for updates...
	I0307 18:47:08.198392   26384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:47:08.199832   26384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	I0307 18:47:08.201253   26384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:47:08.202663   26384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:47:08.204748   26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0307 18:47:08.205285   26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:47:08.205342   26384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:47:08.220069   26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0307 18:47:08.220563   26384 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:47:08.221076   26384 main.go:141] libmachine: Using API Version  1
	I0307 18:47:08.221096   26384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:47:08.221432   26384 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:47:08.221584   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:08.223753   26384 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
	I0307 18:47:08.225235   26384 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:47:08.225524   26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:47:08.225572   26384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:47:08.239705   26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0307 18:47:08.240091   26384 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:47:08.240557   26384 main.go:141] libmachine: Using API Version  1
	I0307 18:47:08.240573   26384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:47:08.240906   26384 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:47:08.241120   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:08.275331   26384 out.go:177] * Using the kvm2 driver based on existing profile
	I0307 18:47:08.276690   26384 start.go:296] selected driver: kvm2
	I0307 18:47:08.276702   26384 start.go:857] validating driver "kvm2" against &{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:47:08.276795   26384 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:47:08.277360   26384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:47:08.277421   26384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4052/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0307 18:47:08.291366   26384 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0307 18:47:08.291664   26384 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:47:08.291694   26384 cni.go:84] Creating CNI manager for ""
	I0307 18:47:08.291705   26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0307 18:47:08.291717   26384 start_flags.go:319] config:
	{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:47:08.291838   26384 iso.go:125] acquiring lock: {Name:mkd51cb229a70df75d89beefefdcafed4c3dd9f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:47:08.293852   26384 out.go:177] * Starting control plane node test-preload-203208 in cluster test-preload-203208
	I0307 18:47:08.296143   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:47:08.450857   26384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0307 18:47:08.450906   26384 cache.go:57] Caching tarball of preloaded images
	I0307 18:47:08.451048   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:47:08.453213   26384 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0307 18:47:08.454642   26384 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:47:08.614514   26384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0307 18:47:32.826448   26384 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:47:32.826536   26384 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:47:33.690125   26384 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on containerd
	I0307 18:47:33.690264   26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
	I0307 18:47:33.690465   26384 cache.go:193] Successfully downloaded all kic artifacts
	I0307 18:47:33.690499   26384 start.go:364] acquiring machines lock for test-preload-203208: {Name:mk86d1042b74b1a783c77f2a2445172eb6d30958 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 18:47:33.690551   26384 start.go:368] acquired machines lock for "test-preload-203208" in 35.693µs
	I0307 18:47:33.690566   26384 start.go:96] Skipping create...Using existing machine configuration
	I0307 18:47:33.690574   26384 fix.go:55] fixHost starting: 
	I0307 18:47:33.690832   26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:47:33.690865   26384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:47:33.704555   26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0307 18:47:33.704995   26384 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:47:33.705526   26384 main.go:141] libmachine: Using API Version  1
	I0307 18:47:33.705549   26384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:47:33.705815   26384 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:47:33.706046   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:33.706249   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetState
	I0307 18:47:33.707747   26384 fix.go:103] recreateIfNeeded on test-preload-203208: state=Stopped err=<nil>
	I0307 18:47:33.707767   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	W0307 18:47:33.707933   26384 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 18:47:33.710555   26384 out.go:177] * Restarting existing kvm2 VM for "test-preload-203208" ...
	I0307 18:47:33.712032   26384 main.go:141] libmachine: (test-preload-203208) Calling .Start
	I0307 18:47:33.712220   26384 main.go:141] libmachine: (test-preload-203208) Ensuring networks are active...
	I0307 18:47:33.712842   26384 main.go:141] libmachine: (test-preload-203208) Ensuring network default is active
	I0307 18:47:33.713296   26384 main.go:141] libmachine: (test-preload-203208) Ensuring network mk-test-preload-203208 is active
	I0307 18:47:33.713652   26384 main.go:141] libmachine: (test-preload-203208) Getting domain xml...
	I0307 18:47:33.714346   26384 main.go:141] libmachine: (test-preload-203208) Creating domain...
	I0307 18:47:34.910876   26384 main.go:141] libmachine: (test-preload-203208) Waiting to get IP...
	I0307 18:47:34.911746   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:34.912163   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:34.912255   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:34.912165   26419 retry.go:31] will retry after 212.425256ms: waiting for machine to come up
	I0307 18:47:35.126663   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:35.127105   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:35.127129   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.127053   26419 retry.go:31] will retry after 263.969499ms: waiting for machine to come up
	I0307 18:47:35.392652   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:35.393060   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:35.393084   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.393015   26419 retry.go:31] will retry after 468.684911ms: waiting for machine to come up
	I0307 18:47:35.863601   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:35.864010   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:35.864033   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.863947   26419 retry.go:31] will retry after 431.412452ms: waiting for machine to come up
	I0307 18:47:36.296448   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:36.296882   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:36.296912   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:36.296828   26419 retry.go:31] will retry after 752.77311ms: waiting for machine to come up
	I0307 18:47:37.050685   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:37.051090   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:37.051119   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.051041   26419 retry.go:31] will retry after 743.261623ms: waiting for machine to come up
	I0307 18:47:37.795856   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:37.796272   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:37.796308   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.796215   26419 retry.go:31] will retry after 1.170690029s: waiting for machine to come up
	I0307 18:47:38.968781   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:38.969233   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:38.969258   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:38.969184   26419 retry.go:31] will retry after 1.337094513s: waiting for machine to come up
	I0307 18:47:40.308636   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:40.309023   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:40.309045   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:40.308986   26419 retry.go:31] will retry after 1.490851661s: waiting for machine to come up
	I0307 18:47:41.801795   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:41.802239   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:41.802269   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:41.802176   26419 retry.go:31] will retry after 2.070649174s: waiting for machine to come up
	I0307 18:47:43.874879   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:43.875349   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:43.875380   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:43.875281   26419 retry.go:31] will retry after 2.737681725s: waiting for machine to come up
	I0307 18:47:46.616128   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:46.616688   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:46.616712   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:46.616637   26419 retry.go:31] will retry after 2.87929565s: waiting for machine to come up
	I0307 18:47:49.497470   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:49.498002   26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
	I0307 18:47:49.498030   26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:49.497932   26419 retry.go:31] will retry after 4.103227875s: waiting for machine to come up
	I0307 18:47:53.606187   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.606663   26384 main.go:141] libmachine: (test-preload-203208) Found IP for machine: 192.168.39.212
	I0307 18:47:53.606696   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has current primary IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.606703   26384 main.go:141] libmachine: (test-preload-203208) Reserving static IP address...
	I0307 18:47:53.607103   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.607138   26384 main.go:141] libmachine: (test-preload-203208) Reserved static IP address: 192.168.39.212
	I0307 18:47:53.607159   26384 main.go:141] libmachine: (test-preload-203208) DBG | skip adding static IP to network mk-test-preload-203208 - found existing host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"}
	I0307 18:47:53.607180   26384 main.go:141] libmachine: (test-preload-203208) DBG | Getting to WaitForSSH function...
	I0307 18:47:53.607195   26384 main.go:141] libmachine: (test-preload-203208) Waiting for SSH to be available...
	I0307 18:47:53.609451   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.609920   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.609952   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.610021   26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH client type: external
	I0307 18:47:53.610088   26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH private key: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa (-rw-------)
	I0307 18:47:53.610128   26384 main.go:141] libmachine: (test-preload-203208) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0307 18:47:53.610153   26384 main.go:141] libmachine: (test-preload-203208) DBG | About to run SSH command:
	I0307 18:47:53.610166   26384 main.go:141] libmachine: (test-preload-203208) DBG | exit 0
	I0307 18:47:53.693376   26384 main.go:141] libmachine: (test-preload-203208) DBG | SSH cmd err, output: <nil>: 
	I0307 18:47:53.693716   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetConfigRaw
	I0307 18:47:53.694380   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:47:53.696583   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.696983   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.697018   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.697232   26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
	I0307 18:47:53.697422   26384 machine.go:88] provisioning docker machine ...
	I0307 18:47:53.697443   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:53.697627   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
	I0307 18:47:53.697782   26384 buildroot.go:166] provisioning hostname "test-preload-203208"
	I0307 18:47:53.697798   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
	I0307 18:47:53.697947   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:53.699860   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.700195   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.700225   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.700341   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:53.700502   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.700619   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.700716   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:53.700853   26384 main.go:141] libmachine: Using SSH client type: native
	I0307 18:47:53.701264   26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0307 18:47:53.701276   26384 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-203208 && echo "test-preload-203208" | sudo tee /etc/hostname
	I0307 18:47:53.818077   26384 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-203208
	
	I0307 18:47:53.818106   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:53.820950   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.821308   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.821334   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.821486   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:53.821689   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.821852   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:53.822005   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:53.822192   26384 main.go:141] libmachine: Using SSH client type: native
	I0307 18:47:53.822574   26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0307 18:47:53.822590   26384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-203208' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-203208/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-203208' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:47:53.938498   26384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:47:53.938531   26384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15985-4052/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-4052/.minikube}
	I0307 18:47:53.938554   26384 buildroot.go:174] setting up certificates
	I0307 18:47:53.938564   26384 provision.go:83] configureAuth start
	I0307 18:47:53.938577   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
	I0307 18:47:53.938823   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:47:53.941788   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.942174   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.942193   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.942389   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:53.944344   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.944651   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:53.944679   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:53.944819   26384 provision.go:138] copyHostCerts
	I0307 18:47:53.944864   26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem, removing ...
	I0307 18:47:53.944874   26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem
	I0307 18:47:53.944936   26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem (1123 bytes)
	I0307 18:47:53.945028   26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem, removing ...
	I0307 18:47:53.945042   26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem
	I0307 18:47:53.945069   26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem (1679 bytes)
	I0307 18:47:53.945118   26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem, removing ...
	I0307 18:47:53.945125   26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem
	I0307 18:47:53.945144   26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem (1078 bytes)
	I0307 18:47:53.945185   26384 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem org=jenkins.test-preload-203208 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube test-preload-203208]
	I0307 18:47:54.280078   26384 provision.go:172] copyRemoteCerts
	I0307 18:47:54.280140   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:47:54.280162   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.282745   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.283051   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.283081   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.283221   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.283408   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.283548   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.283668   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.366577   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 18:47:54.389837   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0307 18:47:54.411718   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 18:47:54.433964   26384 provision.go:86] duration metric: configureAuth took 495.388641ms
	I0307 18:47:54.433989   26384 buildroot.go:189] setting minikube options for container-runtime
	I0307 18:47:54.434187   26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0307 18:47:54.434202   26384 machine.go:91] provisioned docker machine in 736.766542ms
	I0307 18:47:54.434211   26384 start.go:300] post-start starting for "test-preload-203208" (driver="kvm2")
	I0307 18:47:54.434220   26384 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:47:54.434345   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.434642   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:47:54.434666   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.437421   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.437782   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.437822   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.437973   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.438168   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.438298   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.438399   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.518617   26384 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:47:54.522870   26384 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 18:47:54.522893   26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/addons for local assets ...
	I0307 18:47:54.522953   26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/files for local assets ...
	I0307 18:47:54.523037   26384 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem -> 111062.pem in /etc/ssl/certs
	I0307 18:47:54.523135   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 18:47:54.530858   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /etc/ssl/certs/111062.pem (1708 bytes)
	I0307 18:47:54.553945   26384 start.go:303] post-start completed in 119.718718ms
	I0307 18:47:54.553971   26384 fix.go:57] fixHost completed within 20.863395553s
	I0307 18:47:54.553997   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.556837   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.557183   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.557209   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.557405   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.557590   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.557727   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.557837   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.558046   26384 main.go:141] libmachine: Using SSH client type: native
	I0307 18:47:54.558428   26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0307 18:47:54.558440   26384 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 18:47:54.666375   26384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678214874.615825414
	
	I0307 18:47:54.666396   26384 fix.go:207] guest clock: 1678214874.615825414
	I0307 18:47:54.666406   26384 fix.go:220] Guest: 2023-03-07 18:47:54.615825414 +0000 UTC Remote: 2023-03-07 18:47:54.553975557 +0000 UTC m=+46.403616421 (delta=61.849857ms)
	I0307 18:47:54.666428   26384 fix.go:191] guest clock delta is within tolerance: 61.849857ms
	I0307 18:47:54.666435   26384 start.go:83] releasing machines lock for "test-preload-203208", held for 20.975873468s
	I0307 18:47:54.666460   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.666725   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:47:54.669426   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.669811   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.669848   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.669973   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.670422   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.670589   26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
	I0307 18:47:54.670656   26384 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0307 18:47:54.670718   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.670826   26384 ssh_runner.go:195] Run: cat /version.json
	I0307 18:47:54.670851   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
	I0307 18:47:54.673445   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.673511   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.673800   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.673827   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.673938   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:47:54.673967   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:47:54.674023   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.674214   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.674218   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
	I0307 18:47:54.674394   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
	I0307 18:47:54.674402   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.674565   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
	I0307 18:47:54.674569   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.674704   26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
	I0307 18:47:54.759342   26384 ssh_runner.go:195] Run: systemctl --version
	I0307 18:47:54.887421   26384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 18:47:54.893321   26384 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 18:47:54.893397   26384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:47:54.911277   26384 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
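The two lines above disable conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix. A minimal sketch of the same rename step, run against a scratch directory instead of `/etc/cni/net.d` (the file names are made up):

```shell
# Sketch of minikube's bridge-CNI disable step against a scratch dir.
set -eu
cnidir=$(mktemp -d)
touch "$cnidir/87-podman-bridge.conflist" "$cnidir/1-k8s.conflist" \
      "$cnidir/old.conflist.mk_disabled"

# Rename bridge/podman configs that are not already disabled.
find "$cnidir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cnidir"
```

Already-disabled files are skipped, so the step is safe to re-run on every start.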
	I0307 18:47:54.911299   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:47:54.911409   26384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:47:58.947601   26384 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.036162087s)
	I0307 18:47:58.947737   26384 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0307 18:47:58.947802   26384 ssh_runner.go:195] Run: which lz4
	I0307 18:47:58.951928   26384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 18:47:58.955886   26384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 18:47:58.955917   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
	I0307 18:48:00.759696   26384 containerd.go:551] Took 1.807807 seconds to copy over tarball
	I0307 18:48:00.759760   26384 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 18:48:03.914699   26384 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.15491167s)
	I0307 18:48:03.914730   26384 containerd.go:558] Took 3.155008 seconds to extract the tarball
	I0307 18:48:03.914761   26384 ssh_runner.go:146] rm: /preloaded.tar.lz4
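The preload path above copies the tarball over ssh, unpacks it into `/var` with `tar -I lz4`, then deletes it. A sketch of the same pack/extract/cleanup cycle, substituting gzip for lz4 so it runs without the `lz4` tool installed (the `-I` compressor mechanism is the same):

```shell
# Sketch of the preload-tarball unpack step; gzip stands in for lz4.
set -eu
work=$(mktemp -d)
mkdir -p "$work/src/lib/containerd"
echo "image-layer" > "$work/src/lib/containerd/layer1"

# Pack, then extract into a separate root, as minikube extracts into /var.
tar -C "$work/src" -I gzip -cf "$work/preloaded.tar.gz" .
mkdir "$work/var"
tar -I gzip -C "$work/var" -xf "$work/preloaded.tar.gz"
rm "$work/preloaded.tar.gz"   # the log removes /preloaded.tar.lz4 afterwards

cat "$work/var/lib/containerd/layer1"
```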
	I0307 18:48:03.954806   26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:48:04.051307   26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:48:04.067055   26384 start.go:485] detecting cgroup driver to use...
	I0307 18:48:04.067143   26384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 18:48:06.737555   26384 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.670382401s)
	I0307 18:48:06.737634   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:48:06.749559   26384 docker.go:186] disabling cri-docker service (if available) ...
	I0307 18:48:06.749615   26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 18:48:06.761329   26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 18:48:06.773038   26384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 18:48:06.870678   26384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 18:48:06.979667   26384 docker.go:202] disabling docker service ...
	I0307 18:48:06.979735   26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 18:48:06.992492   26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 18:48:07.004415   26384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 18:48:07.107126   26384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 18:48:07.218342   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 18:48:07.230717   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:48:07.248387   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
	I0307 18:48:07.257036   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:48:07.266682   26384 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:48:07.266740   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:48:07.276084   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:48:07.285768   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:48:07.295044   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:48:07.304543   26384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:48:07.314540   26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
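The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place. A sketch applying the same substitutions to a sample file, so the effect of each is visible:

```shell
# Sketch of the config.toml rewrites from the log, on a sample file.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    sandbox_image = "k8s.gcr.io/pause:3.6"
    SystemdCgroup = true
    runtime_type = "io.containerd.runtime.v1.linux"
    conf_dir = "/etc/cni/net.mk"
EOF

# Pin the pause image, switch the cgroup driver to cgroupfs,
# move to the runc v2 shim, and point conf_dir at /etc/cni/net.d.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"

cat "$cfg"
```

The `\1` backreference preserves the original indentation, which keeps the TOML nesting intact.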
	I0307 18:48:07.324106   26384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:48:07.332553   26384 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0307 18:48:07.332592   26384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0307 18:48:07.345783   26384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:48:07.354423   26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:48:07.450860   26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:48:07.472878   26384 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 18:48:07.472979   26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 18:48:07.480739   26384 retry.go:31] will retry after 1.355526534s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0307 18:48:08.836380   26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
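The retry above (`will retry after 1.355526534s`) is minikube polling `stat` on the containerd socket until the daemon finishes restarting, with a 60s deadline. A sketch of the same poll-until-present loop, with a temp file standing in for `/run/containerd/containerd.sock`:

```shell
# Sketch of the "wait for socket path" retry: poll with stat until the
# path appears or a deadline passes.
set -eu
sock=$(mktemp -u)
( sleep 1; touch "$sock" ) &       # simulate containerd creating its socket

deadline=$(( $(date +%s) + 60 ))   # the log waits up to 60s
until stat "$sock" >/dev/null 2>&1; do
  [ "$(date +%s)" -lt "$deadline" ] || { echo "timed out"; exit 1; }
  sleep 0.2
done
echo "socket ready"
wait
```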
	I0307 18:48:08.842045   26384 start.go:553] Will wait 60s for crictl version
	I0307 18:48:08.842108   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:48:08.846136   26384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:48:08.879500   26384 start.go:569] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.6.18
	RuntimeApiVersion:  v1alpha2
	I0307 18:48:08.879555   26384 ssh_runner.go:195] Run: containerd --version
	I0307 18:48:08.907039   26384 ssh_runner.go:195] Run: containerd --version
	I0307 18:48:08.937824   26384 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.18 ...
	I0307 18:48:08.939189   26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
	I0307 18:48:08.941766   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:48:08.942253   26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
	I0307 18:48:08.942274   26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
	I0307 18:48:08.942470   26384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0307 18:48:08.946333   26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
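The `grep -v … ; echo …` pipeline above is minikube's idempotent `/etc/hosts` update: drop any existing line for the name, append the fresh entry, then copy the result back. A sketch of that pattern against a temp file (the `update_hosts` helper name is invented for the sketch):

```shell
# Sketch of minikube's idempotent hosts-file update.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"

update_hosts() {  # $1=ip $2=name $3=hosts-file
  { grep -v "$(printf '\t')$2\$" "$3"; printf '%s\t%s\n' "$1" "$2"; } > "$3.new"
  cp "$3.new" "$3"
}

# Re-running with a new IP replaces the old entry instead of duplicating it.
update_hosts 192.168.39.2 host.minikube.internal "$hosts"
cat "$hosts"
```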
	I0307 18:48:08.958372   26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0307 18:48:08.958447   26384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:48:08.984433   26384 containerd.go:608] all images are preloaded for containerd runtime.
	I0307 18:48:08.984454   26384 containerd.go:522] Images already preloaded, skipping extraction
	I0307 18:48:08.984503   26384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:48:09.011132   26384 containerd.go:608] all images are preloaded for containerd runtime.
	I0307 18:48:09.011156   26384 cache_images.go:84] Images are preloaded, skipping loading
	I0307 18:48:09.011204   26384 ssh_runner.go:195] Run: sudo crictl info
	I0307 18:48:09.039874   26384 cni.go:84] Creating CNI manager for ""
	I0307 18:48:09.039898   26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0307 18:48:09.039907   26384 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 18:48:09.039928   26384 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-203208 NodeName:test-preload-203208 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 18:48:09.040095   26384 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-203208"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 18:48:09.040202   26384 kubeadm.go:968] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-203208 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 18:48:09.040264   26384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0307 18:48:09.049030   26384 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:48:09.049088   26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 18:48:09.057226   26384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
	I0307 18:48:09.073102   26384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:48:09.087939   26384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0307 18:48:09.103091   26384 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0307 18:48:09.106714   26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:48:09.118609   26384 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208 for IP: 192.168.39.212
	I0307 18:48:09.118642   26384 certs.go:186] acquiring lock for shared ca certs: {Name:mk07c09235b5b83043c0b2b2f22c2249661f377a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:48:09.118791   26384 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key
	I0307 18:48:09.118849   26384 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key
	I0307 18:48:09.118912   26384 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key
	I0307 18:48:09.118967   26384 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key.543da273
	I0307 18:48:09.119053   26384 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key
	I0307 18:48:09.119150   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem (1338 bytes)
	W0307 18:48:09.119182   26384 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106_empty.pem, impossibly tiny 0 bytes
	I0307 18:48:09.119193   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 18:48:09.119222   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem (1078 bytes)
	I0307 18:48:09.119259   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:48:09.119296   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem (1679 bytes)
	I0307 18:48:09.119354   26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem (1708 bytes)
	I0307 18:48:09.119887   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0307 18:48:09.142561   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 18:48:09.164647   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 18:48:09.186856   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 18:48:09.209055   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:48:09.233821   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 18:48:09.256607   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:48:09.279276   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 18:48:09.301654   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:48:09.323040   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem --> /usr/share/ca-certificates/11106.pem (1338 bytes)
	I0307 18:48:09.344849   26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /usr/share/ca-certificates/111062.pem (1708 bytes)
	I0307 18:48:09.366857   26384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 18:48:09.382598   26384 ssh_runner.go:195] Run: openssl version
	I0307 18:48:09.387988   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:48:09.396852   26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:48:09.401359   26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:03 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:48:09.401436   26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:48:09.406740   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 18:48:09.415682   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11106.pem && ln -fs /usr/share/ca-certificates/11106.pem /etc/ssl/certs/11106.pem"
	I0307 18:48:09.424547   26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11106.pem
	I0307 18:48:09.428975   26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:09 /usr/share/ca-certificates/11106.pem
	I0307 18:48:09.429015   26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11106.pem
	I0307 18:48:09.434193   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11106.pem /etc/ssl/certs/51391683.0"
	I0307 18:48:09.443361   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111062.pem && ln -fs /usr/share/ca-certificates/111062.pem /etc/ssl/certs/111062.pem"
	I0307 18:48:09.452688   26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111062.pem
	I0307 18:48:09.457057   26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:09 /usr/share/ca-certificates/111062.pem
	I0307 18:48:09.457108   26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111062.pem
	I0307 18:48:09.462237   26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111062.pem /etc/ssl/certs/3ec20f2e.0"
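The cert-installation sequence above computes each CA's OpenSSL subject hash (`openssl x509 -hash -noout`) and symlinks `<hash>.0` in `/etc/ssl/certs` to it, which is how OpenSSL locates trusted CAs by hash. A sketch of that step with a throwaway self-signed cert standing in for minikubeCA:

```shell
# Sketch of the CA hash-symlink step from the log, in a temp dir.
set -eu
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=sketchCA" \
  -keyout "$certs/ca.key" -out "$certs/minikubeCA.pem" 2>/dev/null

hash=$(openssl x509 -hash -noout -in "$certs/minikubeCA.pem")
ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
ls -l "$certs/$hash.0"
```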
	I0307 18:48:09.471411   26384 kubeadm.go:401] StartCluster: {Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:48:09.471554   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 18:48:09.471596   26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 18:48:09.501095   26384 cri.go:87] found id: ""
	I0307 18:48:09.501172   26384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 18:48:09.510140   26384 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0307 18:48:09.510163   26384 kubeadm.go:633] restartCluster start
	I0307 18:48:09.510218   26384 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 18:48:09.518643   26384 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:09.519032   26384 kubeconfig.go:135] verify returned: extract IP: "test-preload-203208" does not appear in /home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:48:09.519129   26384 kubeconfig.go:146] "test-preload-203208" context is missing from /home/jenkins/minikube-integration/15985-4052/kubeconfig - will repair!
	I0307 18:48:09.519386   26384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4052/kubeconfig: {Name:mk89c8bdc0292c804b7314ba2438e95e1215b3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:48:09.519958   26384 kapi.go:59] client config for test-preload-203208: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 18:48:09.520801   26384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 18:48:09.528914   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:09.528956   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:09.538990   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:10.039696   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:10.039767   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:10.050769   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:10.539371   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:10.539470   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:10.550785   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:11.039988   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:11.040093   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:11.051278   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:11.539936   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:11.540040   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:11.551371   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:12.040000   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:12.040077   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:12.051583   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:12.539114   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:12.539176   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:12.550419   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:13.040079   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:13.040172   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:13.052432   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:13.540058   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:13.540141   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:13.551703   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:14.039765   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:14.039847   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:14.051403   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:14.540016   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:14.540094   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:14.552136   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:15.039754   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:15.039852   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:15.051397   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:15.539956   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:15.540068   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:15.551741   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:16.039191   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:16.039261   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:16.050954   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:16.539468   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:16.539533   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:16.550947   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:17.039455   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:17.039523   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:17.050527   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:17.539123   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:17.539207   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:17.551333   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:18.039916   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:18.039999   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:18.051774   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:18.539677   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:18.539783   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:18.551481   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.039543   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:19.039622   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:19.051157   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.539906   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:19.539971   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:19.551522   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.551546   26384 api_server.go:165] Checking apiserver status ...
	I0307 18:48:19.551615   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 18:48:19.562103   26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:48:19.562127   26384 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0307 18:48:19.562135   26384 kubeadm.go:1120] stopping kube-system containers ...
	I0307 18:48:19.562145   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0307 18:48:19.562200   26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 18:48:19.596473   26384 cri.go:87] found id: ""
	I0307 18:48:19.596545   26384 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 18:48:19.611484   26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:48:19.620277   26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:48:19.620347   26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:48:19.629402   26384 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0307 18:48:19.629420   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:19.729048   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:20.693486   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:21.045927   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:21.125427   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 18:48:21.208989   26384 api_server.go:51] waiting for apiserver process to appear ...
	I0307 18:48:21.209053   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:21.727096   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:22.226678   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:22.726635   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:23.227460   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:23.726652   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:24.226895   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:24.727601   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:25.227632   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:25.727342   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:26.226885   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:26.727250   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:27.226755   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:27.727168   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:28.227623   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:28.726792   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:29.227535   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:29.727199   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:30.227533   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:30.726863   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:31.226913   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:31.726742   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:32.226629   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:32.726562   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:33.227256   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:33.727095   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:34.227636   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:34.727529   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:35.226672   26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:48:35.239643   26384 api_server.go:71] duration metric: took 14.030659958s to wait for apiserver process to appear ...
	I0307 18:48:35.239673   26384 api_server.go:87] waiting for apiserver healthz status ...
	I0307 18:48:35.239689   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:40.240554   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:48:40.741289   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:45.742137   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:48:46.240766   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:51.241530   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:48:51.740794   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:55.622725   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:40614->192.168.39.212:8443: read: connection reset by peer
	I0307 18:48:55.741069   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:55.741730   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:56.241350   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:56.241974   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:56.741625   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:56.742311   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:57.240872   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:57.241486   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:57.741098   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:57.741815   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:58.240688   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:58.241449   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:58.740916   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:58.741450   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:59.241002   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:59.241562   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:48:59.741376   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:48:59.741967   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:00.241554   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:00.242185   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:00.740765   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:00.741366   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:01.240922   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:01.241524   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:01.741093   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:01.741672   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:02.241289   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:02.241821   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:02.741466   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:02.742055   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:03.240707   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:03.241321   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:03.741112   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:03.741706   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:04.241289   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:04.241805   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:04.741475   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:04.742120   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:05.240659   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:05.241205   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:05.740827   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:05.741407   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:06.240957   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:06.241520   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:06.741097   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:06.741687   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:07.241323   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:07.241898   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:07.741557   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:07.742492   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:08.241389   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:08.242007   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:08.741481   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:08.742046   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:09.240755   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:09.241344   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:09.741175   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:09.741776   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:10.241384   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:10.242065   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:10.741689   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:10.742367   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:11.240908   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:11.241508   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:11.741066   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:11.741702   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:12.241340   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:12.241992   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:12.741591   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:12.742200   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:13.240991   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:13.241618   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:13.741474   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:13.742095   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:14.240668   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:14.241302   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:14.740851   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:14.741426   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:15.240983   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:15.241592   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:15.741169   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:15.741706   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:16.241315   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:16.241927   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:16.741520   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:16.742200   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:17.240744   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:17.241351   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:17.740916   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:22.742180   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:49:23.240982   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:28.241459   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:49:28.740696   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:33.740940   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:49:34.241557   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:37.998029   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:36774->192.168.39.212:8443: read: connection reset by peer
	I0307 18:49:38.240706   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:38.240797   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:38.274793   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:38.274811   26384 cri.go:87] found id: "5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	I0307 18:49:38.274816   26384 cri.go:87] found id: ""
	I0307 18:49:38.274822   26384 logs.go:277] 2 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]
	I0307 18:49:38.274884   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.279183   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.283139   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:38.283194   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:38.310826   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:38.310844   26384 cri.go:87] found id: ""
	I0307 18:49:38.310850   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:38.310891   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.314471   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:38.314538   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:38.344851   26384 cri.go:87] found id: ""
	I0307 18:49:38.344881   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.344889   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:38.344894   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:38.344965   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:38.377525   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:38.377548   26384 cri.go:87] found id: ""
	I0307 18:49:38.377555   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:38.377609   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.381815   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:38.381869   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:38.417825   26384 cri.go:87] found id: ""
	I0307 18:49:38.417845   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.417851   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:38.417855   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:38.417925   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:38.454042   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:38.454062   26384 cri.go:87] found id: "a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	I0307 18:49:38.454066   26384 cri.go:87] found id: ""
	I0307 18:49:38.454073   26384 logs.go:277] 2 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]
	I0307 18:49:38.454130   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.458203   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:38.461976   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:38.462036   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:38.498530   26384 cri.go:87] found id: ""
	I0307 18:49:38.498555   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.498566   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:38.498573   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:38.498623   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:38.545888   26384 cri.go:87] found id: ""
	I0307 18:49:38.545918   26384 logs.go:277] 0 containers: []
	W0307 18:49:38.545926   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:38.545936   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:38.545952   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:38.596180   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:38.596211   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:38.657673   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:38.657718   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:38.670963   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:38.670998   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:38.710963   26384 logs.go:123] Gathering logs for kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a] ...
	I0307 18:49:38.710992   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	W0307 18:49:38.740233   26384 logs.go:130] failed kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:49:38.717772    1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
	 output: 
	** stderr ** 
	E0307 18:49:38.717772    1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
	
	** /stderr **
	I0307 18:49:38.740259   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:38.740272   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:38.769176   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:38.769208   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:38.816001   26384 logs.go:123] Gathering logs for kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4] ...
	I0307 18:49:38.816029   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	W0307 18:49:38.847807   26384 logs.go:130] failed kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:49:38.825690    1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
	 output: 
	** stderr ** 
	E0307 18:49:38.825690    1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
	time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
	
	** /stderr **
	I0307 18:49:38.847829   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:38.847839   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:38.960358   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:38.960378   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:38.960391   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:39.024178   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:39.024209   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:41.561116   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:41.561705   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:41.741078   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:41.741163   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:41.770944   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:41.770967   26384 cri.go:87] found id: ""
	I0307 18:49:41.770975   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:41.771032   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.774913   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:41.774977   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:41.802816   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:41.802838   26384 cri.go:87] found id: ""
	I0307 18:49:41.802847   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:41.802892   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.806570   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:41.806610   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:41.835237   26384 cri.go:87] found id: ""
	I0307 18:49:41.835270   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.835276   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:41.835281   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:41.835337   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:41.870305   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:41.870323   26384 cri.go:87] found id: ""
	I0307 18:49:41.870329   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:41.870376   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.874332   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:41.874383   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:41.901971   26384 cri.go:87] found id: ""
	I0307 18:49:41.901993   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.901999   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:41.902005   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:41.902057   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:41.929792   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:41.929823   26384 cri.go:87] found id: ""
	I0307 18:49:41.929834   26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:41.929885   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:41.933861   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:41.933945   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:41.962195   26384 cri.go:87] found id: ""
	I0307 18:49:41.962222   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.962230   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:41.962237   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:41.962290   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:41.990939   26384 cri.go:87] found id: ""
	I0307 18:49:41.990965   26384 logs.go:277] 0 containers: []
	W0307 18:49:41.990972   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:41.990984   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:41.990994   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:42.052031   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:42.052054   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:42.052069   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:42.081594   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:42.081622   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:42.109456   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:42.109493   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:42.177139   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:42.177180   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:42.226652   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:42.226679   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:42.287629   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:42.287659   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:42.299095   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:42.299115   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:42.340655   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:42.340684   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:44.881007   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:44.881568   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:45.241058   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:45.241130   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:45.268565   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:45.268588   26384 cri.go:87] found id: ""
	I0307 18:49:45.268596   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:45.268650   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.272618   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:45.272685   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:45.299447   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:45.299471   26384 cri.go:87] found id: ""
	I0307 18:49:45.299479   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:45.299528   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.303332   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:45.303397   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:45.332836   26384 cri.go:87] found id: ""
	I0307 18:49:45.332863   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.332873   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:45.332881   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:45.332989   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:45.359776   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:45.359795   26384 cri.go:87] found id: ""
	I0307 18:49:45.359805   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:45.359864   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.363663   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:45.363725   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:45.389419   26384 cri.go:87] found id: ""
	I0307 18:49:45.389448   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.389459   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:45.389465   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:45.389523   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:45.415773   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:45.415796   26384 cri.go:87] found id: ""
	I0307 18:49:45.415804   26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:45.415860   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:45.419687   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:45.419754   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:45.448748   26384 cri.go:87] found id: ""
	I0307 18:49:45.448777   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.448786   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:45.448791   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:45.448854   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:45.474641   26384 cri.go:87] found id: ""
	I0307 18:49:45.474669   26384 logs.go:277] 0 containers: []
	W0307 18:49:45.474679   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:45.474696   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:45.474711   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:45.486226   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:45.486249   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:45.545694   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:45.545714   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:45.545726   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:45.591466   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:45.591493   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:45.623810   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:45.623841   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:45.686240   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:45.686268   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:45.720278   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:45.720302   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:45.745876   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:45.745913   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:45.809485   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:45.809518   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:48.348770   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:48.349502   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:48.741584   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:48.741651   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:48.777550   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:48.777572   26384 cri.go:87] found id: ""
	I0307 18:49:48.777578   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:48.777636   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.782172   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:48.782233   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:48.818792   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:48.818817   26384 cri.go:87] found id: ""
	I0307 18:49:48.818824   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:48.818869   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.823044   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:48.823106   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:48.857459   26384 cri.go:87] found id: ""
	I0307 18:49:48.857484   26384 logs.go:277] 0 containers: []
	W0307 18:49:48.857491   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:48.857498   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:48.857556   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:48.889707   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:48.889728   26384 cri.go:87] found id: ""
	I0307 18:49:48.889735   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:48.889778   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.894345   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:48.894420   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:48.933590   26384 cri.go:87] found id: ""
	I0307 18:49:48.933610   26384 logs.go:277] 0 containers: []
	W0307 18:49:48.933617   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:48.933622   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:48.933667   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:48.967476   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:48.967495   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:48.967499   26384 cri.go:87] found id: ""
	I0307 18:49:48.967506   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:48.967549   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.971759   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:48.975656   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:48.975714   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:49.026784   26384 cri.go:87] found id: ""
	I0307 18:49:49.026821   26384 logs.go:277] 0 containers: []
	W0307 18:49:49.026831   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:49.026839   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:49.026900   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:49.055435   26384 cri.go:87] found id: ""
	I0307 18:49:49.055458   26384 logs.go:277] 0 containers: []
	W0307 18:49:49.055465   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:49.055476   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:49.055490   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:49.089020   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:49.089048   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:49.138877   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:49.138913   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:49.153088   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:49.153113   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:49.220054   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:49.220079   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:49.220098   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:49.260102   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:49.260132   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:49.288829   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:49.288855   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:49.360373   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:49.360411   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:49.390432   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:49.390471   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:49.438326   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:49.438360   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:51.999825   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:52.000476   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:52.240790   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:52.240869   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:52.268760   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:52.268782   26384 cri.go:87] found id: ""
	I0307 18:49:52.268790   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:52.268860   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.273290   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:52.273355   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:52.303004   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:52.303024   26384 cri.go:87] found id: ""
	I0307 18:49:52.303031   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:52.303070   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.307394   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:52.307454   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:52.334227   26384 cri.go:87] found id: ""
	I0307 18:49:52.334252   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.334259   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:52.334263   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:52.334308   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:52.365944   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:52.365964   26384 cri.go:87] found id: ""
	I0307 18:49:52.365971   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:52.366014   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.369575   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:52.369631   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:52.399970   26384 cri.go:87] found id: ""
	I0307 18:49:52.399998   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.400008   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:52.400015   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:52.400080   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:52.428372   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:52.428394   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:52.428399   26384 cri.go:87] found id: ""
	I0307 18:49:52.428404   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:52.428452   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.432426   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:52.436419   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:52.436468   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:52.465745   26384 cri.go:87] found id: ""
	I0307 18:49:52.465777   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.465786   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:52.465794   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:52.465851   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:52.493993   26384 cri.go:87] found id: ""
	I0307 18:49:52.494022   26384 logs.go:277] 0 containers: []
	W0307 18:49:52.494032   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:52.494048   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:52.494063   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:52.562310   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:52.562349   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:52.601842   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:52.601867   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:52.663702   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:52.663735   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:52.676175   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:52.676205   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:52.725457   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:52.725478   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:52.725491   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:52.773421   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:52.773446   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:52.820180   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:52.820212   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:52.854035   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:52.854060   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:52.882963   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:52.882993   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:55.412727   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:55.413292   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:55.740694   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:55.740782   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:55.769593   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:55.769617   26384 cri.go:87] found id: ""
	I0307 18:49:55.769624   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:55.769675   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.773846   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:55.773918   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:55.799820   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:55.799844   26384 cri.go:87] found id: ""
	I0307 18:49:55.799852   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:55.799904   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.803655   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:55.803714   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:55.830795   26384 cri.go:87] found id: ""
	I0307 18:49:55.830820   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.830829   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:55.830840   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:55.830892   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:55.861486   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:55.861511   26384 cri.go:87] found id: ""
	I0307 18:49:55.861519   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:55.861571   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.865664   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:55.865712   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:55.892035   26384 cri.go:87] found id: ""
	I0307 18:49:55.892057   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.892067   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:55.892074   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:55.892122   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:55.921473   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:55.921491   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:55.921503   26384 cri.go:87] found id: ""
	I0307 18:49:55.921511   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:55.921560   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.925654   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:55.929475   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:55.929539   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:55.956526   26384 cri.go:87] found id: ""
	I0307 18:49:55.956559   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.956566   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:55.956571   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:55.956614   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:55.983852   26384 cri.go:87] found id: ""
	I0307 18:49:55.983873   26384 logs.go:277] 0 containers: []
	W0307 18:49:55.983879   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:55.983891   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:55.983905   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:56.013373   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:56.013404   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:56.075477   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:56.075514   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:56.134932   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:56.134953   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:56.134963   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:56.162676   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:56.162702   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:56.205835   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:56.205864   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:56.254193   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:56.254226   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:49:56.291170   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:56.291199   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:56.303219   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:56.303244   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:56.338501   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:56.338530   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:58.906800   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:49:58.907377   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:49:59.240745   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:49:59.240816   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:49:59.270117   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:59.270138   26384 cri.go:87] found id: ""
	I0307 18:49:59.270148   26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:49:59.270194   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.277486   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:49:59.277555   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:49:59.319990   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:59.320008   26384 cri.go:87] found id: ""
	I0307 18:49:59.320015   26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:49:59.320056   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.324577   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:49:59.324620   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:49:59.355279   26384 cri.go:87] found id: ""
	I0307 18:49:59.355308   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.355318   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:49:59.355325   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:49:59.355383   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:49:59.385970   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:59.386019   26384 cri.go:87] found id: ""
	I0307 18:49:59.386029   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:49:59.386084   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.389898   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:49:59.389957   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:49:59.418100   26384 cri.go:87] found id: ""
	I0307 18:49:59.418123   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.418132   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:49:59.418141   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:49:59.418199   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:49:59.448963   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:59.448984   26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	I0307 18:49:59.448990   26384 cri.go:87] found id: ""
	I0307 18:49:59.448998   26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
	I0307 18:49:59.449053   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.452973   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:49:59.456699   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:49:59.456745   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:49:59.487041   26384 cri.go:87] found id: ""
	I0307 18:49:59.487066   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.487075   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:49:59.487081   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:49:59.487141   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:49:59.520702   26384 cri.go:87] found id: ""
	I0307 18:49:59.520733   26384 logs.go:277] 0 containers: []
	W0307 18:49:59.520744   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:49:59.520756   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:49:59.520770   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:49:59.534981   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:49:59.535020   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:49:59.571150   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:49:59.571176   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:49:59.608785   26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
	I0307 18:49:59.608815   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	W0307 18:49:59.635030   26384 logs.go:130] failed kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:49:59.613980    2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
	 output: 
	** stderr ** 
	E0307 18:49:59.613980    2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
	time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
	
	** /stderr **
	I0307 18:49:59.635047   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:49:59.635057   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:49:59.681919   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:49:59.681947   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:49:59.738173   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:49:59.738205   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:49:59.789970   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:49:59.789991   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:49:59.790005   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:49:59.859269   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:49:59.859302   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:49:59.901677   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:49:59.901708   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:02.439332   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:07.439703   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0307 18:50:07.741227   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:07.741304   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:07.771935   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:07.771958   26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:50:07.771964   26384 cri.go:87] found id: ""
	I0307 18:50:07.771972   26384 logs.go:277] 2 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
	I0307 18:50:07.772033   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.775931   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.779533   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:07.779583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:07.807355   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:07.807372   26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	I0307 18:50:07.807376   26384 cri.go:87] found id: ""
	I0307 18:50:07.807382   26384 logs.go:277] 2 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
	I0307 18:50:07.807423   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.810941   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.814428   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:07.814480   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:07.840502   26384 cri.go:87] found id: ""
	I0307 18:50:07.840530   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.840537   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:07.840543   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:07.840590   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:07.872460   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:07.872482   26384 cri.go:87] found id: ""
	I0307 18:50:07.872490   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:07.872532   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.876167   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:07.876234   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:07.902163   26384 cri.go:87] found id: ""
	I0307 18:50:07.902185   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.902194   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:07.902203   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:07.902264   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:07.934206   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:07.934234   26384 cri.go:87] found id: ""
	I0307 18:50:07.934244   26384 logs.go:277] 1 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:07.934302   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:07.937973   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:07.938062   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:07.969362   26384 cri.go:87] found id: ""
	I0307 18:50:07.969395   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.969406   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:07.969413   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:07.969476   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:07.996288   26384 cri.go:87] found id: ""
	I0307 18:50:07.996313   26384 logs.go:277] 0 containers: []
	W0307 18:50:07.996322   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:07.996332   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:07.996346   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:08.022863   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:08.022893   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:08.072434   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:08.072467   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:08.110215   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:08.110244   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:08.139123   26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
	I0307 18:50:08.139152   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
	I0307 18:50:08.172722   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:08.172748   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 18:50:22.210905   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.038132901s)
	W0307 18:50:22.210954   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:22.210963   26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
	I0307 18:50:22.210973   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	W0307 18:50:22.243161   26384 logs.go:130] failed etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7": Process exited with status 1
	stdout:
	
	stderr:
	E0307 18:50:22.230070    2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
	 output: 
	** stderr ** 
	E0307 18:50:22.230070    2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
	time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
	
	** /stderr **
	I0307 18:50:22.243182   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:22.243194   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:22.312610   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:22.312647   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:22.376483   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:22.376512   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:22.441347   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:22.441379   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:24.956249   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:24.956843   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:25.241295   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:25.241366   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:25.271038   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:25.271057   26384 cri.go:87] found id: ""
	I0307 18:50:25.271063   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:25.271112   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.275131   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:25.275189   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:25.304102   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:25.304122   26384 cri.go:87] found id: ""
	I0307 18:50:25.304131   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:25.304176   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.308112   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:25.308165   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:25.335593   26384 cri.go:87] found id: ""
	I0307 18:50:25.335621   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.335631   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:25.335639   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:25.335696   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:25.366744   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:25.366765   26384 cri.go:87] found id: ""
	I0307 18:50:25.366773   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:25.366814   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.370479   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:25.370523   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:25.397628   26384 cri.go:87] found id: ""
	I0307 18:50:25.397651   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.397657   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:25.397662   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:25.397703   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:25.424370   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:25.424388   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:25.424392   26384 cri.go:87] found id: ""
	I0307 18:50:25.424399   26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:25.424438   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.428375   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:25.432135   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:25.432197   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:25.464666   26384 cri.go:87] found id: ""
	I0307 18:50:25.464686   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.464693   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:25.464698   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:25.464754   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:25.495748   26384 cri.go:87] found id: ""
	I0307 18:50:25.495771   26384 logs.go:277] 0 containers: []
	W0307 18:50:25.495778   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:25.495798   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:25.495816   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:25.552387   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:25.552409   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:25.552419   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:25.585072   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:25.585100   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:25.612624   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:25.612652   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:25.642351   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:25.642375   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:25.696054   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:25.696080   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:25.759230   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:25.759261   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:25.771377   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:25.771400   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:25.814932   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:25.814958   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:25.880431   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:25.880462   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:28.429316   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:28.430023   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:28.740900   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:28.740981   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:28.771490   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:28.771510   26384 cri.go:87] found id: ""
	I0307 18:50:28.771517   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:28.771573   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.775481   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:28.775544   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:28.803618   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:28.803637   26384 cri.go:87] found id: ""
	I0307 18:50:28.803644   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:28.803682   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.807610   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:28.807656   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:28.837030   26384 cri.go:87] found id: ""
	I0307 18:50:28.837048   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.837053   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:28.837058   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:28.837105   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:28.868318   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:28.868344   26384 cri.go:87] found id: ""
	I0307 18:50:28.868353   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:28.868412   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.872041   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:28.872096   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:28.900155   26384 cri.go:87] found id: ""
	I0307 18:50:28.900186   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.900195   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:28.900206   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:28.900266   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:28.928973   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:28.929007   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:28.929014   26384 cri.go:87] found id: ""
	I0307 18:50:28.929022   26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:28.929080   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.932963   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:28.936674   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:28.936728   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:28.965932   26384 cri.go:87] found id: ""
	I0307 18:50:28.965955   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.965965   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:28.965972   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:28.966027   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:28.996172   26384 cri.go:87] found id: ""
	I0307 18:50:28.996202   26384 logs.go:277] 0 containers: []
	W0307 18:50:28.996213   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:28.996230   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:28.996252   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:29.027476   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:29.027505   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:29.068982   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:29.069007   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:29.123121   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:29.123155   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:29.154965   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:29.154990   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:29.221021   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:29.221051   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:29.275777   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:29.275800   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:29.275817   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:29.305802   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:29.305836   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:29.374935   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:29.374971   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:29.404375   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:29.404401   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:31.916470   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:31.917095   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:32.241577   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:32.241647   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:32.273069   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:32.273102   26384 cri.go:87] found id: ""
	I0307 18:50:32.273108   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:32.273164   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.277800   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:32.277842   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:32.312694   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:32.312722   26384 cri.go:87] found id: ""
	I0307 18:50:32.312732   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:32.312778   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.316764   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:32.316809   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:32.348032   26384 cri.go:87] found id: ""
	I0307 18:50:32.348049   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.348054   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:32.348059   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:32.348116   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:32.382261   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:32.382286   26384 cri.go:87] found id: ""
	I0307 18:50:32.382297   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:32.382355   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.386519   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:32.386583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:32.423869   26384 cri.go:87] found id: ""
	I0307 18:50:32.423890   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.423897   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:32.423902   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:32.423964   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:32.461514   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:32.461538   26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:32.461545   26384 cri.go:87] found id: ""
	I0307 18:50:32.461553   26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
	I0307 18:50:32.461606   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.465604   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:32.469437   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:32.469474   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:32.507355   26384 cri.go:87] found id: ""
	I0307 18:50:32.507376   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.507388   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:32.507395   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:32.507451   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:32.545202   26384 cri.go:87] found id: ""
	I0307 18:50:32.545230   26384 logs.go:277] 0 containers: []
	W0307 18:50:32.545240   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:32.545257   26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
	I0307 18:50:32.545270   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
	I0307 18:50:32.598969   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:32.598996   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:32.666940   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:32.666972   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:32.724486   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:32.724506   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:32.724516   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:32.758363   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:32.758389   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:32.838189   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:32.838228   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:32.891708   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:32.891740   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:32.903720   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:32.903746   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:32.936722   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:32.936745   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:32.969027   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:32.969055   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:35.524418   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:35.525031   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:35.741445   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:35.741534   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:35.771644   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:35.771665   26384 cri.go:87] found id: ""
	I0307 18:50:35.771673   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:35.771733   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.775944   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:35.776002   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:35.807438   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:35.807455   26384 cri.go:87] found id: ""
	I0307 18:50:35.807464   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:35.807512   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.811521   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:35.811577   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:35.839719   26384 cri.go:87] found id: ""
	I0307 18:50:35.839739   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.839746   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:35.839751   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:35.839801   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:35.870068   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:35.870089   26384 cri.go:87] found id: ""
	I0307 18:50:35.870096   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:35.870139   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.873953   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:35.874009   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:35.907548   26384 cri.go:87] found id: ""
	I0307 18:50:35.907576   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.907584   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:35.907589   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:35.907648   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:35.938809   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:35.938828   26384 cri.go:87] found id: ""
	I0307 18:50:35.938834   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:35.938888   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:35.943995   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:35.944045   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:35.971387   26384 cri.go:87] found id: ""
	I0307 18:50:35.971406   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.971413   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:35.971420   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:35.971470   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:35.998911   26384 cri.go:87] found id: ""
	I0307 18:50:35.998938   26384 logs.go:277] 0 containers: []
	W0307 18:50:35.998965   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:35.998982   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:35.999012   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:36.038815   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:36.038848   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:36.077044   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:36.077071   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:36.129558   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:36.129591   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:36.129604   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:36.166935   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:36.166960   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:36.195852   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:36.195882   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:36.271088   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:36.271123   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:36.326628   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:36.326662   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:36.389379   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:36.389411   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:38.901954   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:38.902491   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:39.240923   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:39.241009   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:39.271083   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:39.271107   26384 cri.go:87] found id: ""
	I0307 18:50:39.271116   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:39.271171   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.275511   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:39.275567   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:39.306601   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:39.306618   26384 cri.go:87] found id: ""
	I0307 18:50:39.306625   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:39.306672   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.311169   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:39.311223   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:39.341921   26384 cri.go:87] found id: ""
	I0307 18:50:39.341940   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.341945   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:39.341951   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:39.342005   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:39.370475   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:39.370499   26384 cri.go:87] found id: ""
	I0307 18:50:39.370509   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:39.370560   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.374423   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:39.374480   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:39.404780   26384 cri.go:87] found id: ""
	I0307 18:50:39.404801   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.404809   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:39.404819   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:39.404877   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:39.435660   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:39.435684   26384 cri.go:87] found id: ""
	I0307 18:50:39.435692   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:39.435746   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:39.439799   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:39.439857   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:39.468225   26384 cri.go:87] found id: ""
	I0307 18:50:39.468250   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.468259   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:39.468267   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:39.468325   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:39.500922   26384 cri.go:87] found id: ""
	I0307 18:50:39.500949   26384 logs.go:277] 0 containers: []
	W0307 18:50:39.500958   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:39.500982   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:39.500995   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:39.530882   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:39.530921   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:39.600657   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:39.600685   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:39.649285   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:39.649317   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:39.697957   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:39.697989   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:39.759513   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:39.759544   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:39.772345   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:39.772373   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:39.831389   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:39.831411   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:39.831421   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:39.864274   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:39.864314   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:42.400891   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:42.401466   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:42.740872   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:42.740939   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:42.768431   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:42.768453   26384 cri.go:87] found id: ""
	I0307 18:50:42.768460   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:42.768513   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.772288   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:42.772331   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:42.798526   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:42.798553   26384 cri.go:87] found id: ""
	I0307 18:50:42.798562   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:42.798603   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.802234   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:42.802282   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:42.828743   26384 cri.go:87] found id: ""
	I0307 18:50:42.828762   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.828769   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:42.828774   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:42.828825   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:42.856471   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:42.856494   26384 cri.go:87] found id: ""
	I0307 18:50:42.856501   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:42.856546   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.860506   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:42.860571   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:42.886392   26384 cri.go:87] found id: ""
	I0307 18:50:42.886416   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.886423   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:42.886428   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:42.886474   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:42.913452   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:42.913478   26384 cri.go:87] found id: ""
	I0307 18:50:42.913487   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:42.913532   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:42.917323   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:42.917383   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:42.943946   26384 cri.go:87] found id: ""
	I0307 18:50:42.943964   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.943970   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:42.943975   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:42.944025   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:42.969863   26384 cri.go:87] found id: ""
	I0307 18:50:42.969888   26384 logs.go:277] 0 containers: []
	W0307 18:50:42.969896   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:42.969927   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:42.969944   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:43.027701   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:43.027737   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:43.041018   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:43.041051   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:43.090630   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:43.090658   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:43.090670   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:43.162692   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:43.162728   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:43.208000   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:43.208025   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:43.241826   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:43.241853   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:43.272472   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:43.272497   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:43.323281   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:43.323311   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:45.854952   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:45.855553   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:46.241035   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:46.241121   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:46.274554   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:46.274576   26384 cri.go:87] found id: ""
	I0307 18:50:46.274583   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:46.274637   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.278942   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:46.278994   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:46.307295   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:46.307313   26384 cri.go:87] found id: ""
	I0307 18:50:46.307320   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:46.307363   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.311114   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:46.311163   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:46.341762   26384 cri.go:87] found id: ""
	I0307 18:50:46.341780   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.341787   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:46.341792   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:46.341852   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:46.374164   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:46.374187   26384 cri.go:87] found id: ""
	I0307 18:50:46.374196   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:46.374252   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.378131   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:46.378201   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:46.406158   26384 cri.go:87] found id: ""
	I0307 18:50:46.406176   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.406182   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:46.406188   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:46.406230   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:46.434896   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:46.434922   26384 cri.go:87] found id: ""
	I0307 18:50:46.434931   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:46.434985   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:46.438785   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:46.438842   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:46.469078   26384 cri.go:87] found id: ""
	I0307 18:50:46.469100   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.469107   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:46.469113   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:46.469178   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:46.500068   26384 cri.go:87] found id: ""
	I0307 18:50:46.500096   26384 logs.go:277] 0 containers: []
	W0307 18:50:46.500105   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:46.500117   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:46.500128   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:46.537674   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:46.537702   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:46.599647   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:46.599677   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:46.611626   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:46.611656   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:46.664489   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:46.664513   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:46.664526   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:46.698473   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:46.698501   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:46.730118   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:46.730147   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:46.777380   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:46.777407   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:46.827387   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:46.827416   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:49.400363   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:49.400915   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:49.741647   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:49.741733   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:49.774027   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:49.774056   26384 cri.go:87] found id: ""
	I0307 18:50:49.774065   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:49.774123   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.778228   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:49.778286   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:49.807806   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:49.807832   26384 cri.go:87] found id: ""
	I0307 18:50:49.807841   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:49.807884   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.811537   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:49.811584   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:49.839443   26384 cri.go:87] found id: ""
	I0307 18:50:49.839468   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.839477   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:49.839485   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:49.839543   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:49.868206   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:49.868225   26384 cri.go:87] found id: ""
	I0307 18:50:49.868232   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:49.868273   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.871988   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:49.872029   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:49.903763   26384 cri.go:87] found id: ""
	I0307 18:50:49.903790   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.903802   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:49.903809   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:49.903869   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:49.931386   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:49.931408   26384 cri.go:87] found id: ""
	I0307 18:50:49.931417   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:49.931470   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:49.935416   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:49.935472   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:49.964413   26384 cri.go:87] found id: ""
	I0307 18:50:49.964442   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.964451   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:49.964457   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:49.964519   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:49.995371   26384 cri.go:87] found id: ""
	I0307 18:50:49.995400   26384 logs.go:277] 0 containers: []
	W0307 18:50:49.995410   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:49.995428   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:49.995443   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:50.027383   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:50.027415   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:50.102948   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:50.102987   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:50.153563   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:50.153595   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:50.187209   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:50.187240   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:50.252908   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:50.252940   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:50.265236   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:50.265260   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:50.319484   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:50.319506   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:50.319518   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:50.349093   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:50.349119   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:52.888932   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:52.889665   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:53.241383   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:53.241454   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:53.270824   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:53.270844   26384 cri.go:87] found id: ""
	I0307 18:50:53.270851   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:53.270903   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.274602   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:53.274642   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:53.307455   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:53.307483   26384 cri.go:87] found id: ""
	I0307 18:50:53.307492   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:53.307545   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.311591   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:53.311651   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:53.339718   26384 cri.go:87] found id: ""
	I0307 18:50:53.339742   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.339751   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:53.339758   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:53.339811   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:53.369697   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:53.369729   26384 cri.go:87] found id: ""
	I0307 18:50:53.369739   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:53.369781   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.373719   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:53.373782   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:53.401736   26384 cri.go:87] found id: ""
	I0307 18:50:53.401754   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.401760   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:53.401764   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:53.401823   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:53.432212   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:53.432236   26384 cri.go:87] found id: ""
	I0307 18:50:53.432244   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:53.432301   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:53.436390   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:53.436449   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:53.465471   26384 cri.go:87] found id: ""
	I0307 18:50:53.465500   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.465518   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:53.465525   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:53.465583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:53.493404   26384 cri.go:87] found id: ""
	I0307 18:50:53.493431   26384 logs.go:277] 0 containers: []
	W0307 18:50:53.493440   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:53.493455   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:53.493468   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:53.556791   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:53.556823   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:53.568973   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:53.568992   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:53.621325   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:53.621345   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:53.621356   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:53.662717   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:53.662744   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:53.693831   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:53.693855   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:53.731078   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:53.731104   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:53.759392   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:53.759416   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:53.827438   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:53.827472   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:56.380799   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:56.381488   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:50:56.740948   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:50:56.741023   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:50:56.777942   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:56.777966   26384 cri.go:87] found id: ""
	I0307 18:50:56.777977   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:50:56.778023   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.782180   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:50:56.782230   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:50:56.810835   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:56.810861   26384 cri.go:87] found id: ""
	I0307 18:50:56.810870   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:50:56.810916   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.814853   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:50:56.814919   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:50:56.842426   26384 cri.go:87] found id: ""
	I0307 18:50:56.842451   26384 logs.go:277] 0 containers: []
	W0307 18:50:56.842459   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:50:56.842465   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:50:56.842517   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:50:56.877177   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:56.877204   26384 cri.go:87] found id: ""
	I0307 18:50:56.877212   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:50:56.877269   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.881405   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:50:56.881477   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:50:56.913559   26384 cri.go:87] found id: ""
	I0307 18:50:56.913584   26384 logs.go:277] 0 containers: []
	W0307 18:50:56.913594   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:50:56.913602   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:50:56.913659   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:50:56.941955   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:56.941979   26384 cri.go:87] found id: ""
	I0307 18:50:56.941987   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:50:56.942045   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:50:56.946194   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:50:56.946260   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:50:56.978326   26384 cri.go:87] found id: ""
	I0307 18:50:56.978349   26384 logs.go:277] 0 containers: []
	W0307 18:50:56.978355   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:50:56.978361   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:50:56.978420   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:50:57.007950   26384 cri.go:87] found id: ""
	I0307 18:50:57.007973   26384 logs.go:277] 0 containers: []
	W0307 18:50:57.007979   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:50:57.007990   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:50:57.008004   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:50:57.079815   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:50:57.079853   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:50:57.120095   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:50:57.120125   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:50:57.180846   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:50:57.180881   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:50:57.193148   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:50:57.193171   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:50:57.246199   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:50:57.246224   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:50:57.246238   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:50:57.299491   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:50:57.299528   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:50:57.335019   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:50:57.335052   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:50:57.363632   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:50:57.363662   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:50:59.901204   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:50:59.901827   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:00.241273   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:00.241359   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:00.271191   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:00.271210   26384 cri.go:87] found id: ""
	I0307 18:51:00.271217   26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:51:00.271260   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.276060   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:00.276095   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:00.313616   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:00.313635   26384 cri.go:87] found id: ""
	I0307 18:51:00.313642   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:00.313691   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.317695   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:00.317746   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:00.354185   26384 cri.go:87] found id: ""
	I0307 18:51:00.354202   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.354210   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:00.354217   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:00.354272   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:00.388615   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:00.388637   26384 cri.go:87] found id: ""
	I0307 18:51:00.388646   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:00.388708   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.392706   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:00.392764   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:00.419909   26384 cri.go:87] found id: ""
	I0307 18:51:00.419930   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.419937   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:00.419942   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:00.419989   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:00.448896   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:00.448921   26384 cri.go:87] found id: ""
	I0307 18:51:00.448929   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:00.448982   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:00.452787   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:00.452848   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:00.482963   26384 cri.go:87] found id: ""
	I0307 18:51:00.482983   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.482989   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:00.482994   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:00.483049   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:00.510864   26384 cri.go:87] found id: ""
	I0307 18:51:00.510894   26384 logs.go:277] 0 containers: []
	W0307 18:51:00.510905   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:00.510922   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:00.510938   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:00.584622   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:00.584656   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:00.620966   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:00.620997   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:00.633989   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:00.634015   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:00.685115   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:00.685136   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:51:00.685145   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:00.722939   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:00.722971   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:00.751368   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:00.751399   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:00.814202   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:00.814234   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:00.855965   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:00.855990   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:03.406623   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:03.407166   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:03.740702   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:03.740777   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:03.774539   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:03.774560   26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:03.774567   26384 cri.go:87] found id: ""
	I0307 18:51:03.774575   26384 logs.go:277] 2 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
	I0307 18:51:03.774639   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.778696   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.782771   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:03.782817   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:03.818150   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:03.818173   26384 cri.go:87] found id: ""
	I0307 18:51:03.818182   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:03.818226   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.822385   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:03.822442   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:03.855669   26384 cri.go:87] found id: ""
	I0307 18:51:03.855697   26384 logs.go:277] 0 containers: []
	W0307 18:51:03.855706   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:03.855713   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:03.855765   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:03.888270   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:03.888297   26384 cri.go:87] found id: ""
	I0307 18:51:03.888304   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:03.888346   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.892269   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:03.892332   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:03.920187   26384 cri.go:87] found id: ""
	I0307 18:51:03.920221   26384 logs.go:277] 0 containers: []
	W0307 18:51:03.920232   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:03.920239   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:03.920296   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:03.953587   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:03.953613   26384 cri.go:87] found id: ""
	I0307 18:51:03.953620   26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:03.953664   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:03.957799   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:03.957864   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:03.990134   26384 cri.go:87] found id: ""
	I0307 18:51:03.990163   26384 logs.go:277] 0 containers: []
	W0307 18:51:03.990173   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:03.990180   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:03.990252   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:04.027162   26384 cri.go:87] found id: ""
	I0307 18:51:04.027193   26384 logs.go:277] 0 containers: []
	W0307 18:51:04.027203   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:04.027222   26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
	I0307 18:51:04.027242   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
	I0307 18:51:04.067517   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:04.067549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:04.149401   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:04.149431   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:04.193745   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:04.193773   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:04.255156   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:04.255194   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:04.273611   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:04.273640   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 18:51:25.368122   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.094454524s)
	W0307 18:51:25.368169   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:25.368184   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:25.368198   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:25.400867   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:25.400894   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:25.431796   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:25.431828   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:25.487683   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:25.487715   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:28.026074   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:28.026610   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:28.241444   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:28.241526   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:28.274761   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:28.274787   26384 cri.go:87] found id: ""
	I0307 18:51:28.274794   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:28.274855   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.279831   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:28.279890   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:28.313516   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:28.313534   26384 cri.go:87] found id: ""
	I0307 18:51:28.313546   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:28.313588   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.317666   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:28.317719   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:28.347101   26384 cri.go:87] found id: ""
	I0307 18:51:28.347124   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.347131   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:28.347136   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:28.347198   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:28.378300   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:28.378320   26384 cri.go:87] found id: ""
	I0307 18:51:28.378326   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:28.378377   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.382695   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:28.382753   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:28.410959   26384 cri.go:87] found id: ""
	I0307 18:51:28.410981   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.410988   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:28.410995   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:28.411048   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:28.441806   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:28.441826   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:28.441833   26384 cri.go:87] found id: ""
	I0307 18:51:28.441842   26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:28.441892   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.446211   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:28.450221   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:28.450282   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:28.483257   26384 cri.go:87] found id: ""
	I0307 18:51:28.483279   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.483286   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:28.483292   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:28.483358   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:28.510972   26384 cri.go:87] found id: ""
	I0307 18:51:28.510998   26384 logs.go:277] 0 containers: []
	W0307 18:51:28.511008   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:28.511026   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:28.511044   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:28.524745   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:28.524776   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:28.578288   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:28.578311   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:28.578323   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:28.611345   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:28.611382   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:28.683142   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:28.683180   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:28.713237   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:28.713266   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:28.751528   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:28.751554   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:28.789824   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:28.789849   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:28.849258   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:28.849288   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:28.881741   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:28.881766   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:31.435018   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:31.435708   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:31.741199   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:31.741275   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:31.775567   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:31.775595   26384 cri.go:87] found id: ""
	I0307 18:51:31.775603   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:31.775660   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.779786   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:31.779843   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:31.811197   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:31.811217   26384 cri.go:87] found id: ""
	I0307 18:51:31.811225   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:31.811279   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.815320   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:31.815380   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:31.844870   26384 cri.go:87] found id: ""
	I0307 18:51:31.844898   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.844907   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:31.844915   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:31.844992   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:31.872742   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:31.872765   26384 cri.go:87] found id: ""
	I0307 18:51:31.872779   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:31.872834   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.876867   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:31.876935   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:31.903271   26384 cri.go:87] found id: ""
	I0307 18:51:31.903299   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.903306   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:31.903311   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:31.903361   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:31.930122   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:31.930143   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:31.930147   26384 cri.go:87] found id: ""
	I0307 18:51:31.930153   26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:31.930194   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.933837   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:31.937392   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:31.937451   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:31.963795   26384 cri.go:87] found id: ""
	I0307 18:51:31.963818   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.963824   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:31.963830   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:31.963871   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:31.997078   26384 cri.go:87] found id: ""
	I0307 18:51:31.997101   26384 logs.go:277] 0 containers: []
	W0307 18:51:31.997107   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:31.997119   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:31.997133   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:32.085403   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:32.085436   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:32.115532   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:32.115557   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:32.171653   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:32.171688   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:32.204332   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:32.204361   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:32.216172   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:32.216197   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:32.266551   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:32.266575   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:32.266593   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:32.297132   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:32.297159   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:32.344077   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:32.344105   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:32.403948   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:32.403977   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:34.935152   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:34.935872   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:35.241335   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:35.241407   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:35.270388   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:35.270412   26384 cri.go:87] found id: ""
	I0307 18:51:35.270418   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:35.270468   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.275051   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:35.275114   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:35.304925   26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:35.304971   26384 cri.go:87] found id: ""
	I0307 18:51:35.304979   26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
	I0307 18:51:35.305030   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.308987   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:35.309043   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:35.334992   26384 cri.go:87] found id: ""
	I0307 18:51:35.335015   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.335024   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:35.335031   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:35.335078   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:35.363029   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:35.363054   26384 cri.go:87] found id: ""
	I0307 18:51:35.363062   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:35.363112   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.366976   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:35.367027   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:35.393011   26384 cri.go:87] found id: ""
	I0307 18:51:35.393033   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.393040   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:35.393046   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:35.393089   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:35.418706   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:35.418731   26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:35.418738   26384 cri.go:87] found id: ""
	I0307 18:51:35.418746   26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
	I0307 18:51:35.418795   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.422711   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:35.426344   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:35.426404   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:35.453517   26384 cri.go:87] found id: ""
	I0307 18:51:35.453540   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.453547   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:35.453552   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:35.453600   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:35.480473   26384 cri.go:87] found id: ""
	I0307 18:51:35.480506   26384 logs.go:277] 0 containers: []
	W0307 18:51:35.480535   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:35.480557   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:35.480572   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:35.514397   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:35.514430   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:35.553507   26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
	I0307 18:51:35.553543   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
	I0307 18:51:35.594291   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:35.594323   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:35.649916   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:35.649950   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:35.708932   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:35.708962   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:35.720655   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:35.720682   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:35.775147   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:35.775170   26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
	I0307 18:51:35.775185   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
	I0307 18:51:35.808353   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:35.808378   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:35.888351   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:35.888387   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:38.421085   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:38.421679   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:38.741179   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:38.741264   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:38.771512   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:38.771541   26384 cri.go:87] found id: ""
	I0307 18:51:38.771552   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:38.771608   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.775448   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:38.775518   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:38.803713   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:38.803738   26384 cri.go:87] found id: ""
	I0307 18:51:38.803746   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:38.803797   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.807432   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:38.807485   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:38.841539   26384 cri.go:87] found id: ""
	I0307 18:51:38.841564   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.841572   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:38.841580   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:38.841700   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:38.873163   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:38.873189   26384 cri.go:87] found id: ""
	I0307 18:51:38.873197   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:38.873244   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.876827   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:38.876887   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:38.904500   26384 cri.go:87] found id: ""
	I0307 18:51:38.904525   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.904535   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:38.904541   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:38.904605   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:38.933684   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:38.933703   26384 cri.go:87] found id: ""
	I0307 18:51:38.933708   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:38.933753   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:38.937611   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:38.937673   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:38.967298   26384 cri.go:87] found id: ""
	I0307 18:51:38.967317   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.967323   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:38.967329   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:38.967381   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:38.994836   26384 cri.go:87] found id: ""
	I0307 18:51:38.994857   26384 logs.go:277] 0 containers: []
	W0307 18:51:38.994864   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:38.994875   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:38.994885   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:39.013172   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:39.013202   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:39.050550   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:39.050577   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:39.081654   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:39.081686   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:39.122178   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:39.122206   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:39.157534   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:39.157558   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:39.215607   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:39.215638   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:39.270533   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:39.270555   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:39.270565   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:39.351014   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:39.351046   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:41.910810   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:41.911444   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:42.240866   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:42.240934   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:42.270659   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:42.270686   26384 cri.go:87] found id: ""
	I0307 18:51:42.270693   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:42.270744   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.274956   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:42.275009   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:42.302640   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:42.302659   26384 cri.go:87] found id: ""
	I0307 18:51:42.302666   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:42.302708   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.306628   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:42.306683   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:42.333725   26384 cri.go:87] found id: ""
	I0307 18:51:42.333744   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.333750   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:42.333757   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:42.333797   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:42.361433   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:42.361455   26384 cri.go:87] found id: ""
	I0307 18:51:42.361461   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:42.361525   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.365419   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:42.365475   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:42.390359   26384 cri.go:87] found id: ""
	I0307 18:51:42.390386   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.390394   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:42.390400   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:42.390466   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:42.418877   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:42.418900   26384 cri.go:87] found id: ""
	I0307 18:51:42.418909   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:42.418961   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:42.422852   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:42.422922   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:42.449901   26384 cri.go:87] found id: ""
	I0307 18:51:42.449937   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.449947   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:42.449953   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:42.450013   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:42.478218   26384 cri.go:87] found id: ""
	I0307 18:51:42.478243   26384 logs.go:277] 0 containers: []
	W0307 18:51:42.478251   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:42.478269   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:42.478286   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:42.506655   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:42.506700   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:42.582409   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:42.582444   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:42.615907   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:42.615931   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:42.657529   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:42.657560   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:42.712843   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:42.712871   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:42.745993   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:42.746017   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:42.808149   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:42.808182   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:42.820414   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:42.820435   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:42.873183   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
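The cycle above is minikube enumerating each expected control-plane container via `crictl ps -a --quiet --name=<component>` and flagging the ones with no matching container ("No container was found matching …"). A minimal sketch of that enumeration pattern (the `runner` stub and helper names are hypothetical; the real code shells out over SSH as the log shows):

```python
import subprocess

# Components probed in the log cycle above, in the same order.
COMPONENTS = ["kube-apiserver", "etcd", "coredns", "kube-scheduler",
              "kube-proxy", "kube-controller-manager", "kindnet",
              "storage-provisioner"]

def find_containers(name, runner=None):
    """Return container IDs reported by `crictl ps -a --quiet --name=<name>`.

    `runner` is a hypothetical injection point so the crictl call can be
    stubbed in tests; by default it runs the command locally.
    """
    runner = runner or (lambda cmd: subprocess.run(
        cmd, capture_output=True, text=True).stdout)
    out = runner(["sudo", "crictl", "ps", "-a", "--quiet", f"--name={name}"])
    return [line for line in out.splitlines() if line.strip()]

def missing_components(runner=None):
    """List components with no container -- the 'No container was found'
    warnings in the log correspond to entries returned here."""
    return [c for c in COMPONENTS if not find_containers(c, runner)]
```

In the failing run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager have containers; coredns, kube-proxy, kindnet, and storage-provisioner never come up, which is consistent with the apiserver refusing connections on 8443.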
	I0307 18:51:45.374057   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:45.374585   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:45.741047   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:45.741134   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:45.770908   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:45.770936   26384 cri.go:87] found id: ""
	I0307 18:51:45.770944   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:45.771001   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.775199   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:45.775271   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:45.804540   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:45.804560   26384 cri.go:87] found id: ""
	I0307 18:51:45.804567   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:45.804609   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.808609   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:45.808686   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:45.835602   26384 cri.go:87] found id: ""
	I0307 18:51:45.835627   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.835635   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:45.835643   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:45.835702   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:45.868007   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:45.868029   26384 cri.go:87] found id: ""
	I0307 18:51:45.868038   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:45.868098   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.872229   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:45.872288   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:45.900275   26384 cri.go:87] found id: ""
	I0307 18:51:45.900301   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.900310   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:45.900317   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:45.900380   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:45.928163   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:45.928182   26384 cri.go:87] found id: ""
	I0307 18:51:45.928189   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:45.928248   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:45.932473   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:45.932532   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:45.961937   26384 cri.go:87] found id: ""
	I0307 18:51:45.961971   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.961982   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:45.961990   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:45.962041   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:45.991124   26384 cri.go:87] found id: ""
	I0307 18:51:45.991158   26384 logs.go:277] 0 containers: []
	W0307 18:51:45.991165   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:45.991178   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:45.991195   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:46.055916   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:46.055947   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:46.069670   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:46.069697   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:46.123987   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:46.124010   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:46.124024   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:46.158206   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:46.158235   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:46.234157   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:46.234188   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:46.277028   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:46.277054   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:46.331295   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:46.331325   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:46.369056   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:46.369081   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:48.902692   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:48.903509   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:49.240949   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:49.241016   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:49.270709   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:49.270735   26384 cri.go:87] found id: ""
	I0307 18:51:49.270744   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:49.270804   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.274731   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:49.274789   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:49.302081   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:49.302100   26384 cri.go:87] found id: ""
	I0307 18:51:49.302108   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:49.302166   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.306174   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:49.306234   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:49.333438   26384 cri.go:87] found id: ""
	I0307 18:51:49.333461   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.333468   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:49.333474   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:49.333527   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:49.365533   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:49.365562   26384 cri.go:87] found id: ""
	I0307 18:51:49.365569   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:49.365610   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.369216   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:49.369276   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:49.398301   26384 cri.go:87] found id: ""
	I0307 18:51:49.398326   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.398334   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:49.398341   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:49.398398   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:49.427703   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:49.427722   26384 cri.go:87] found id: ""
	I0307 18:51:49.427730   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:49.427774   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:49.431651   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:49.431702   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:49.462642   26384 cri.go:87] found id: ""
	I0307 18:51:49.462667   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.462674   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:49.462679   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:49.462729   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:49.489078   26384 cri.go:87] found id: ""
	I0307 18:51:49.489106   26384 logs.go:277] 0 containers: []
	W0307 18:51:49.489116   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:49.489129   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:49.489140   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:49.518966   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:49.518994   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:49.578313   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:49.578343   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:49.632259   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:49.632280   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:49.632292   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:49.665772   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:49.665797   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:49.745503   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:49.745534   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:49.785793   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:49.785819   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:49.821781   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:49.821843   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:49.888865   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:49.888906   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:52.403328   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:52.403890   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:52.741393   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:52.741477   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:52.770492   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:52.770514   26384 cri.go:87] found id: ""
	I0307 18:51:52.770520   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:52.770575   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.774281   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:52.774334   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:52.804403   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:52.804427   26384 cri.go:87] found id: ""
	I0307 18:51:52.804435   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:52.804480   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.808178   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:52.808226   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:52.836026   26384 cri.go:87] found id: ""
	I0307 18:51:52.836048   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.836055   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:52.836060   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:52.836118   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:52.867795   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:52.867824   26384 cri.go:87] found id: ""
	I0307 18:51:52.867834   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:52.867891   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.871532   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:52.871602   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:52.899536   26384 cri.go:87] found id: ""
	I0307 18:51:52.899558   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.899565   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:52.899570   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:52.899631   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:52.927081   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:52.927105   26384 cri.go:87] found id: ""
	I0307 18:51:52.927114   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:52.927170   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:52.930990   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:52.931056   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:52.961939   26384 cri.go:87] found id: ""
	I0307 18:51:52.961965   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.961973   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:52.961978   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:52.962025   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:52.990556   26384 cri.go:87] found id: ""
	I0307 18:51:52.990582   26384 logs.go:277] 0 containers: []
	W0307 18:51:52.990589   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:52.990602   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:52.990611   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:53.055863   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:53.055899   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:53.118674   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:53.118699   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:53.118712   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:53.160200   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:53.160226   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:53.193132   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:53.193157   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:53.206488   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:53.206521   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:53.239547   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:53.239575   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:53.271150   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:53.271179   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:53.355907   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:53.355937   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:55.915778   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:55.916343   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
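Taken together, the log shows one fixed pattern repeating every few seconds: probe `https://<node>:8443/healthz`, and on connection refusal enumerate the CRI containers and gather kubelet/dmesg/container logs before retrying. A minimal sketch of that poll-with-diagnostics loop (hypothetical helper names; not minikube's actual implementation):

```python
import time

def wait_healthy(probe, gather_diagnostics, timeout=30.0, interval=3.5):
    """Poll `probe()` until it returns True or `timeout` elapses.

    After each failed probe, call `gather_diagnostics()` -- mirroring the
    healthz check -> crictl ps -> log-gathering cycle seen above -- then
    sleep `interval` seconds and retry. Returns True on success, False
    if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        gather_diagnostics()
        time.sleep(interval)
    return False
```

In this run the probe never succeeds: every `Checking apiserver healthz` line is immediately followed by `connection refused`, so the loop keeps emitting the same diagnostics until the test's overall timeout fires.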
	I0307 18:51:56.240741   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:56.240815   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:56.276584   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:56.276609   26384 cri.go:87] found id: ""
	I0307 18:51:56.276616   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:56.276662   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.280478   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:56.280543   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:56.310551   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:56.310580   26384 cri.go:87] found id: ""
	I0307 18:51:56.310591   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:56.310652   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.314325   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:56.314380   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:56.345523   26384 cri.go:87] found id: ""
	I0307 18:51:56.345545   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.345555   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:56.345562   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:56.345613   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:56.374295   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:56.374316   26384 cri.go:87] found id: ""
	I0307 18:51:56.374325   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:56.374369   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.377845   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:56.377893   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:56.407290   26384 cri.go:87] found id: ""
	I0307 18:51:56.407314   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.407323   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:56.407330   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:56.407387   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:56.434800   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:56.434822   26384 cri.go:87] found id: ""
	I0307 18:51:56.434831   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:56.434889   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:56.438706   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:56.438771   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:56.469291   26384 cri.go:87] found id: ""
	I0307 18:51:56.469321   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.469331   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:56.469338   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:56.469400   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:56.496682   26384 cri.go:87] found id: ""
	I0307 18:51:56.496707   26384 logs.go:277] 0 containers: []
	W0307 18:51:56.496716   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:56.496731   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:51:56.496749   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:51:56.558292   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:51:56.558324   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:51:56.616546   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:51:56.616566   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:51:56.616576   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:56.645444   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:56.645482   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:56.690522   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:51:56.690549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:51:56.729452   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:51:56.729480   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:51:56.741227   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:51:56.741250   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:56.774040   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:51:56.774069   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:56.851946   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:51:56.851980   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:51:59.410226   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:51:59.410809   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:51:59.741513   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:51:59.741583   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:51:59.770692   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:51:59.770715   26384 cri.go:87] found id: ""
	I0307 18:51:59.770723   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:51:59.770773   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.774597   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:51:59.774652   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:51:59.802266   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:51:59.802286   26384 cri.go:87] found id: ""
	I0307 18:51:59.802293   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:51:59.802330   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.805853   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:51:59.805892   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:51:59.833448   26384 cri.go:87] found id: ""
	I0307 18:51:59.833466   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.833473   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:51:59.833477   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:51:59.833517   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:51:59.864701   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:51:59.864723   26384 cri.go:87] found id: ""
	I0307 18:51:59.864732   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:51:59.864787   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.868622   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:51:59.868687   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:51:59.900470   26384 cri.go:87] found id: ""
	I0307 18:51:59.900500   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.900510   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:51:59.900518   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:51:59.900573   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:51:59.927551   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:51:59.927580   26384 cri.go:87] found id: ""
	I0307 18:51:59.927588   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:51:59.927633   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:51:59.931339   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:51:59.931393   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:51:59.959403   26384 cri.go:87] found id: ""
	I0307 18:51:59.959426   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.959436   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:51:59.959442   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:51:59.959484   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:51:59.987595   26384 cri.go:87] found id: ""
	I0307 18:51:59.987616   26384 logs.go:277] 0 containers: []
	W0307 18:51:59.987623   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:51:59.987637   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:51:59.987654   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:00.035743   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:00.035772   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:00.099440   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:00.099473   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:00.131520   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:00.131549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:00.208993   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:00.209030   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:00.267588   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:00.267622   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:00.301447   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:00.301476   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:00.313284   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:00.313307   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:00.368862   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:00.368881   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:00.368892   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:02.901502   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:02.902198   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:03.240812   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:03.240884   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:03.271596   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:03.271623   26384 cri.go:87] found id: ""
	I0307 18:52:03.271632   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:03.271693   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.276075   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:03.276140   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:03.306294   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:03.306321   26384 cri.go:87] found id: ""
	I0307 18:52:03.306329   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:03.306372   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.310127   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:03.310195   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:03.346928   26384 cri.go:87] found id: ""
	I0307 18:52:03.346956   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.346964   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:03.346970   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:03.347028   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:03.373901   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:03.373935   26384 cri.go:87] found id: ""
	I0307 18:52:03.373944   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:03.374004   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.377726   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:03.377816   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:03.408820   26384 cri.go:87] found id: ""
	I0307 18:52:03.408855   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.408862   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:03.408880   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:03.408938   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:03.437027   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:03.437049   26384 cri.go:87] found id: ""
	I0307 18:52:03.437060   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:03.437104   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:03.440989   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:03.441047   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:03.470590   26384 cri.go:87] found id: ""
	I0307 18:52:03.470614   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.470621   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:03.470627   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:03.470688   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:03.500217   26384 cri.go:87] found id: ""
	I0307 18:52:03.500244   26384 logs.go:277] 0 containers: []
	W0307 18:52:03.500252   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:03.500267   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:03.500280   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:03.566239   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:03.566268   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:03.625165   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:03.625184   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:03.625195   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:03.682195   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:03.682226   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:03.719700   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:03.719727   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:03.731216   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:03.731240   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:03.763196   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:03.763229   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:03.791661   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:03.791686   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:03.868166   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:03.868202   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:06.409727   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:06.410322   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:06.740737   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:06.740806   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:06.771108   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:06.771137   26384 cri.go:87] found id: ""
	I0307 18:52:06.771144   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:06.771189   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.775193   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:06.775250   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:06.806716   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:06.806737   26384 cri.go:87] found id: ""
	I0307 18:52:06.806746   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:06.806795   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.810459   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:06.810504   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:06.837774   26384 cri.go:87] found id: ""
	I0307 18:52:06.837797   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.837804   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:06.837809   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:06.837860   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:06.866218   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:06.866239   26384 cri.go:87] found id: ""
	I0307 18:52:06.866249   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:06.866303   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.869982   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:06.870039   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:06.899518   26384 cri.go:87] found id: ""
	I0307 18:52:06.899546   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.899556   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:06.899562   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:06.899617   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:06.927743   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:06.927770   26384 cri.go:87] found id: ""
	I0307 18:52:06.927778   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:06.927820   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:06.931549   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:06.931613   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:06.961419   26384 cri.go:87] found id: ""
	I0307 18:52:06.961445   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.961452   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:06.961457   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:06.961518   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:06.989502   26384 cri.go:87] found id: ""
	I0307 18:52:06.989526   26384 logs.go:277] 0 containers: []
	W0307 18:52:06.989532   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:06.989546   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:06.989559   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:07.025827   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:07.025850   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:07.086485   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:07.086512   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:07.098772   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:07.098799   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:07.130198   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:07.130225   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:07.212261   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:07.212293   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:07.268115   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:07.268148   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:07.330511   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:07.330537   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:07.330549   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:07.362299   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:07.362331   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:09.904436   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:09.905035   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:10.241493   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:10.241591   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:10.270226   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:10.270250   26384 cri.go:87] found id: ""
	I0307 18:52:10.270259   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:10.270316   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.274003   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:10.274065   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:10.301912   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:10.301935   26384 cri.go:87] found id: ""
	I0307 18:52:10.301943   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:10.301995   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.305750   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:10.305809   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:10.333329   26384 cri.go:87] found id: ""
	I0307 18:52:10.333347   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.333356   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:10.333364   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:10.333415   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:10.365807   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:10.365830   26384 cri.go:87] found id: ""
	I0307 18:52:10.365837   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:10.365876   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.369503   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:10.369555   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:10.402354   26384 cri.go:87] found id: ""
	I0307 18:52:10.402382   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.402391   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:10.402398   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:10.402458   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:10.431242   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:10.431268   26384 cri.go:87] found id: ""
	I0307 18:52:10.431278   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:10.431331   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:10.435085   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:10.435150   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:10.462020   26384 cri.go:87] found id: ""
	I0307 18:52:10.462044   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.462053   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:10.462059   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:10.462117   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:10.492729   26384 cri.go:87] found id: ""
	I0307 18:52:10.492755   26384 logs.go:277] 0 containers: []
	W0307 18:52:10.492761   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:10.492776   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:10.492788   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:10.550753   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:10.550787   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:10.587328   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:10.587353   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:10.649658   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:10.649690   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:10.688111   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:10.688141   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:10.715243   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:10.715271   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:10.794097   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:10.794129   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:10.806313   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:10.806337   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:10.859925   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:10.859948   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:10.859957   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:13.412753   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:13.413326   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:13.740752   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:13.740822   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:13.769106   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:13.769130   26384 cri.go:87] found id: ""
	I0307 18:52:13.769139   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:13.769197   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.772932   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:13.772977   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:13.799190   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:13.799214   26384 cri.go:87] found id: ""
	I0307 18:52:13.799224   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:13.799272   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.803163   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:13.803229   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:13.829114   26384 cri.go:87] found id: ""
	I0307 18:52:13.829137   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.829143   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:13.829148   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:13.829215   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:13.860207   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:13.860232   26384 cri.go:87] found id: ""
	I0307 18:52:13.860241   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:13.860299   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.864306   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:13.864365   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:13.895421   26384 cri.go:87] found id: ""
	I0307 18:52:13.895447   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.895456   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:13.895464   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:13.895523   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:13.926222   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:13.926245   26384 cri.go:87] found id: ""
	I0307 18:52:13.926252   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:13.926301   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:13.930178   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:13.930235   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:13.954048   26384 cri.go:87] found id: ""
	I0307 18:52:13.954067   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.954073   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:13.954081   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:13.954137   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:13.982093   26384 cri.go:87] found id: ""
	I0307 18:52:13.982112   26384 logs.go:277] 0 containers: []
	W0307 18:52:13.982118   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:13.982130   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:13.982143   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:14.038975   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:14.038990   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:14.039000   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:14.090619   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:14.090645   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:14.148386   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:14.148418   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:14.209750   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:14.209782   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:14.222299   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:14.222320   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:14.259738   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:14.259764   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:14.288148   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:14.288183   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:14.364866   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:14.364898   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:16.896622   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:16.897179   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:17.241681   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:17.241765   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:17.270963   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:17.270985   26384 cri.go:87] found id: ""
	I0307 18:52:17.270994   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:17.271055   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.274819   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:17.274879   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:17.303431   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:17.303455   26384 cri.go:87] found id: ""
	I0307 18:52:17.303464   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:17.303516   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.307271   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:17.307316   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:17.336969   26384 cri.go:87] found id: ""
	I0307 18:52:17.336994   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.337002   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:17.337009   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:17.337061   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:17.364451   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:17.364476   26384 cri.go:87] found id: ""
	I0307 18:52:17.364484   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:17.364543   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.368076   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:17.368130   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:17.395637   26384 cri.go:87] found id: ""
	I0307 18:52:17.395660   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.395667   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:17.395672   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:17.395715   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:17.423253   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:17.423273   26384 cri.go:87] found id: ""
	I0307 18:52:17.423279   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:17.423321   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:17.427005   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:17.427060   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:17.454713   26384 cri.go:87] found id: ""
	I0307 18:52:17.454731   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.454736   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:17.454742   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:17.454784   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:17.486176   26384 cri.go:87] found id: ""
	I0307 18:52:17.486199   26384 logs.go:277] 0 containers: []
	W0307 18:52:17.486206   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:17.486219   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:17.486229   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:17.498032   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:17.498055   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:17.557073   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:17.557097   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:17.557110   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:17.594388   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:17.594418   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:17.620305   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:17.620338   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:17.702872   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:17.702904   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:17.759889   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:17.759926   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:17.817947   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:17.817980   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:17.865944   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:17.865973   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:20.398731   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:20.399378   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:20.740808   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:20.740889   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:20.774030   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:20.774056   26384 cri.go:87] found id: ""
	I0307 18:52:20.774066   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:20.774117   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.778074   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:20.778136   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:20.806773   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:20.806791   26384 cri.go:87] found id: ""
	I0307 18:52:20.806798   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:20.806846   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.810652   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:20.810700   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:20.838994   26384 cri.go:87] found id: ""
	I0307 18:52:20.839019   26384 logs.go:277] 0 containers: []
	W0307 18:52:20.839029   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:20.839042   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:20.839102   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:20.869727   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:20.869748   26384 cri.go:87] found id: ""
	I0307 18:52:20.869756   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:20.869812   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.873736   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:20.873793   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:20.901823   26384 cri.go:87] found id: ""
	I0307 18:52:20.901844   26384 logs.go:277] 0 containers: []
	W0307 18:52:20.901851   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:20.901857   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:20.901929   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:20.934273   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:20.934298   26384 cri.go:87] found id: ""
	I0307 18:52:20.934306   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:20.934356   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:20.938406   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:20.938472   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:20.969450   26384 cri.go:87] found id: ""
	I0307 18:52:20.969479   26384 logs.go:277] 0 containers: []
	W0307 18:52:20.969486   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:20.969492   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:20.969541   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:21.001492   26384 cri.go:87] found id: ""
	I0307 18:52:21.001514   26384 logs.go:277] 0 containers: []
	W0307 18:52:21.001521   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:21.001534   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:21.001548   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:21.054970   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:21.054986   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:21.054995   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:21.088359   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:21.088383   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:21.120677   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:21.120706   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:21.182999   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:21.183047   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:21.245976   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:21.246016   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:21.346906   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:21.346937   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:21.395390   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:21.395425   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:21.428290   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:21.428320   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:23.941739   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:23.942328   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:24.240694   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:24.240774   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:24.270200   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:24.270223   26384 cri.go:87] found id: ""
	I0307 18:52:24.270230   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:24.270277   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.274395   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:24.274459   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:24.305875   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:24.305898   26384 cri.go:87] found id: ""
	I0307 18:52:24.305919   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:24.305974   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.309735   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:24.309791   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:24.336466   26384 cri.go:87] found id: ""
	I0307 18:52:24.336484   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.336493   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:24.336499   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:24.336550   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:24.364312   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:24.364337   26384 cri.go:87] found id: ""
	I0307 18:52:24.364347   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:24.364398   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.368537   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:24.368610   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:24.399307   26384 cri.go:87] found id: ""
	I0307 18:52:24.399333   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.399343   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:24.399350   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:24.399410   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:24.428137   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:24.428157   26384 cri.go:87] found id: ""
	I0307 18:52:24.428165   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:24.428220   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:24.432114   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:24.432177   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:24.458423   26384 cri.go:87] found id: ""
	I0307 18:52:24.458443   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.458452   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:24.458458   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:24.458507   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:24.486856   26384 cri.go:87] found id: ""
	I0307 18:52:24.486881   26384 logs.go:277] 0 containers: []
	W0307 18:52:24.486889   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:24.486907   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:24.486920   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:24.568604   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:24.568635   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:24.609771   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:24.609802   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:24.665713   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:24.665734   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:24.665752   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:24.691910   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:24.691937   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:24.723832   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:24.723860   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:24.764806   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:24.764833   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:24.821496   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:24.821529   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:24.880200   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:24.880230   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:27.393632   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:27.394219   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:27.741710   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:27.741782   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:27.770323   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:27.770343   26384 cri.go:87] found id: ""
	I0307 18:52:27.770349   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:27.770405   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.774285   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:27.774345   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:27.800912   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:27.800933   26384 cri.go:87] found id: ""
	I0307 18:52:27.800942   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:27.800991   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.804444   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:27.804490   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:27.836265   26384 cri.go:87] found id: ""
	I0307 18:52:27.836290   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.836297   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:27.836303   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:27.836359   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:27.865231   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:27.865260   26384 cri.go:87] found id: ""
	I0307 18:52:27.865269   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:27.865317   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.869523   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:27.869586   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:27.900740   26384 cri.go:87] found id: ""
	I0307 18:52:27.900770   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.900780   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:27.900787   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:27.900849   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:27.929343   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:27.929371   26384 cri.go:87] found id: ""
	I0307 18:52:27.929381   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:27.929440   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:27.933280   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:27.933348   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:27.966078   26384 cri.go:87] found id: ""
	I0307 18:52:27.966104   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.966111   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:27.966119   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:27.966175   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:27.994539   26384 cri.go:87] found id: ""
	I0307 18:52:27.994562   26384 logs.go:277] 0 containers: []
	W0307 18:52:27.994568   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:27.994581   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:27.994591   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:28.026948   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:28.026989   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:28.039179   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:28.039208   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:28.094604   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:28.094626   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:28.094637   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:28.134457   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:28.134490   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:28.190768   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:28.192394   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:28.251450   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:28.251489   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:28.285082   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:28.285108   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:28.316724   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:28.316750   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:30.901642   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:30.902211   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:31.241667   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:31.241736   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:31.271253   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:31.271279   26384 cri.go:87] found id: ""
	I0307 18:52:31.271288   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:31.271343   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.275766   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:31.275822   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:31.304092   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:31.304115   26384 cri.go:87] found id: ""
	I0307 18:52:31.304121   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:31.304161   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.307829   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:31.307887   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:31.336157   26384 cri.go:87] found id: ""
	I0307 18:52:31.336184   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.336193   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:31.336201   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:31.336266   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:31.362407   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:31.362427   26384 cri.go:87] found id: ""
	I0307 18:52:31.362433   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:31.362484   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.366267   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:31.366323   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:31.392005   26384 cri.go:87] found id: ""
	I0307 18:52:31.392031   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.392040   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:31.392047   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:31.392107   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:31.417145   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:31.417164   26384 cri.go:87] found id: ""
	I0307 18:52:31.417170   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:31.417226   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:31.421051   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:31.421093   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:31.452946   26384 cri.go:87] found id: ""
	I0307 18:52:31.452966   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.452973   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:31.452991   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:31.453072   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:31.482025   26384 cri.go:87] found id: ""
	I0307 18:52:31.482048   26384 logs.go:277] 0 containers: []
	W0307 18:52:31.482058   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:31.482075   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:31.482094   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:31.535162   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:31.535180   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:31.535190   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:31.575114   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:31.575149   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:31.630597   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:31.630629   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:31.689816   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:31.689854   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:31.703439   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:31.703465   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:31.733755   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:31.733789   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:31.761485   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:31.761517   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:31.849205   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:31.849238   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:34.397092   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:34.399029   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:34.740924   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:52:34.741012   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:52:34.768741   26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:34.768769   26384 cri.go:87] found id: ""
	I0307 18:52:34.768776   26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
	I0307 18:52:34.768826   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.772560   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:52:34.772608   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:52:34.801197   26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:34.801219   26384 cri.go:87] found id: ""
	I0307 18:52:34.801226   26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
	I0307 18:52:34.801268   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.805070   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:52:34.805123   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:52:34.841217   26384 cri.go:87] found id: ""
	I0307 18:52:34.841245   26384 logs.go:277] 0 containers: []
	W0307 18:52:34.841258   26384 logs.go:279] No container was found matching "coredns"
	I0307 18:52:34.841267   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:52:34.841329   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:52:34.878585   26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:34.878643   26384 cri.go:87] found id: ""
	I0307 18:52:34.878663   26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
	I0307 18:52:34.878720   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.882566   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:52:34.882625   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:52:34.909524   26384 cri.go:87] found id: ""
	I0307 18:52:34.909550   26384 logs.go:277] 0 containers: []
	W0307 18:52:34.909557   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 18:52:34.909565   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:52:34.909613   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:52:34.936954   26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:34.936975   26384 cri.go:87] found id: ""
	I0307 18:52:34.936983   26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
	I0307 18:52:34.937053   26384 ssh_runner.go:195] Run: which crictl
	I0307 18:52:34.941502   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:52:34.941564   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:52:34.971973   26384 cri.go:87] found id: ""
	I0307 18:52:34.971995   26384 logs.go:277] 0 containers: []
	W0307 18:52:34.972004   26384 logs.go:279] No container was found matching "kindnet"
	I0307 18:52:34.972011   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:52:34.972070   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:52:35.003175   26384 cri.go:87] found id: ""
	I0307 18:52:35.003199   26384 logs.go:277] 0 containers: []
	W0307 18:52:35.003206   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 18:52:35.003221   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:52:35.003233   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 18:52:35.057263   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 18:52:35.057287   26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
	I0307 18:52:35.057300   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
	I0307 18:52:35.093840   26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
	I0307 18:52:35.093865   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
	I0307 18:52:35.131551   26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
	I0307 18:52:35.131580   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
	I0307 18:52:35.213034   26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
	I0307 18:52:35.213066   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
	I0307 18:52:35.250410   26384 logs.go:123] Gathering logs for containerd ...
	I0307 18:52:35.250442   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:52:35.305928   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 18:52:35.305959   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 18:52:35.366041   26384 logs.go:123] Gathering logs for container status ...
	I0307 18:52:35.366074   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:52:35.411044   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 18:52:35.411068   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:52:37.924460   26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0307 18:52:37.925115   26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
	I0307 18:52:38.240997   26384 kubeadm.go:637] restartCluster took 4m28.730822487s
	W0307 18:52:38.241143   26384 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	I0307 18:52:38.241176   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0307 18:52:39.540779   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.299584283s)
	I0307 18:52:39.540844   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:52:39.554353   26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 18:52:39.563539   26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:52:39.572536   26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:52:39.572574   26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 18:52:39.609552   26384 kubeadm.go:322] W0307 18:52:39.601196    5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0307 18:52:39.746961   26384 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 18:56:41.125984   26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0307 18:56:41.126127   26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0307 18:56:41.127655   26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
	I0307 18:56:41.127696   26384 kubeadm.go:322] [preflight] Running pre-flight checks
	I0307 18:56:41.127765   26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 18:56:41.127875   26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 18:56:41.127983   26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 18:56:41.128061   26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 18:56:41.130326   26384 out.go:204]   - Generating certificates and keys ...
	I0307 18:56:41.130393   26384 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0307 18:56:41.130451   26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0307 18:56:41.130531   26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 18:56:41.130620   26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0307 18:56:41.130718   26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 18:56:41.130787   26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0307 18:56:41.130866   26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0307 18:56:41.130953   26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0307 18:56:41.131049   26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 18:56:41.131155   26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 18:56:41.131217   26384 kubeadm.go:322] [certs] Using the existing "sa" key
	I0307 18:56:41.131292   26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 18:56:41.131363   26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 18:56:41.131434   26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 18:56:41.131523   26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 18:56:41.131603   26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 18:56:41.131688   26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 18:56:41.131762   26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 18:56:41.131795   26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0307 18:56:41.131852   26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 18:56:41.133514   26384 out.go:204]   - Booting up control plane ...
	I0307 18:56:41.133618   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 18:56:41.133699   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 18:56:41.133776   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 18:56:41.133863   26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 18:56:41.134051   26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 18:56:41.134110   26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0307 18:56:41.134119   26384 kubeadm.go:322] 
	I0307 18:56:41.134162   26384 kubeadm.go:322] Unfortunately, an error has occurred:
	I0307 18:56:41.134218   26384 kubeadm.go:322] 	timed out waiting for the condition
	I0307 18:56:41.134224   26384 kubeadm.go:322] 
	I0307 18:56:41.134270   26384 kubeadm.go:322] This error is likely caused by:
	I0307 18:56:41.134347   26384 kubeadm.go:322] 	- The kubelet is not running
	I0307 18:56:41.134504   26384 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0307 18:56:41.134517   26384 kubeadm.go:322] 
	I0307 18:56:41.134650   26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0307 18:56:41.134698   26384 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0307 18:56:41.134741   26384 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0307 18:56:41.134760   26384 kubeadm.go:322] 
	I0307 18:56:41.134863   26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0307 18:56:41.134935   26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0307 18:56:41.135037   26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0307 18:56:41.135174   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0307 18:56:41.135274   26384 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0307 18:56:41.135447   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	W0307 18:56:41.135604   26384 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:52:39.601196    5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0307 18:56:41.135655   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0307 18:56:42.416834   26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.281155319s)
	I0307 18:56:42.416897   26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:56:42.431050   26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 18:56:42.440667   26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 18:56:42.440700   26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 18:56:42.477411   26384 kubeadm.go:322] W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0307 18:56:42.627046   26384 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 19:00:43.649484   26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0307 19:00:43.649599   26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0307 19:00:43.651218   26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
	I0307 19:00:43.651271   26384 kubeadm.go:322] [preflight] Running pre-flight checks
	I0307 19:00:43.651420   26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 19:00:43.651548   26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 19:00:43.651725   26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 19:00:43.651796   26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 19:00:43.654219   26384 out.go:204]   - Generating certificates and keys ...
	I0307 19:00:43.654288   26384 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0307 19:00:43.654338   26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0307 19:00:43.654403   26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 19:00:43.654458   26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0307 19:00:43.654514   26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0307 19:00:43.654563   26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0307 19:00:43.654618   26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0307 19:00:43.654668   26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0307 19:00:43.654730   26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 19:00:43.654798   26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 19:00:43.654859   26384 kubeadm.go:322] [certs] Using the existing "sa" key
	I0307 19:00:43.654935   26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 19:00:43.654978   26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 19:00:43.655070   26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 19:00:43.655168   26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 19:00:43.655220   26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 19:00:43.655347   26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 19:00:43.655430   26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 19:00:43.655465   26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0307 19:00:43.655523   26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 19:00:43.657162   26384 out.go:204]   - Booting up control plane ...
	I0307 19:00:43.657245   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 19:00:43.657351   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 19:00:43.657442   26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 19:00:43.657533   26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 19:00:43.657658   26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 19:00:43.657699   26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0307 19:00:43.657705   26384 kubeadm.go:322] 
	I0307 19:00:43.657736   26384 kubeadm.go:322] Unfortunately, an error has occurred:
	I0307 19:00:43.657782   26384 kubeadm.go:322] 	timed out waiting for the condition
	I0307 19:00:43.657789   26384 kubeadm.go:322] 
	I0307 19:00:43.657829   26384 kubeadm.go:322] This error is likely caused by:
	I0307 19:00:43.657862   26384 kubeadm.go:322] 	- The kubelet is not running
	I0307 19:00:43.657966   26384 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0307 19:00:43.657977   26384 kubeadm.go:322] 
	I0307 19:00:43.658062   26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0307 19:00:43.658091   26384 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0307 19:00:43.658134   26384 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0307 19:00:43.658142   26384 kubeadm.go:322] 
	I0307 19:00:43.658255   26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0307 19:00:43.658393   26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0307 19:00:43.658480   26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0307 19:00:43.658603   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0307 19:00:43.658702   26384 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0307 19:00:43.658828   26384 kubeadm.go:322] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0307 19:00:43.658871   26384 kubeadm.go:403] StartCluster complete in 12m34.187466467s
	I0307 19:00:43.658927   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 19:00:43.658974   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 19:00:43.701064   26384 cri.go:87] found id: "4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
	I0307 19:00:43.701086   26384 cri.go:87] found id: ""
	I0307 19:00:43.701098   26384 logs.go:277] 1 containers: [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc]
	I0307 19:00:43.701142   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.705362   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 19:00:43.705417   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 19:00:43.734452   26384 cri.go:87] found id: "c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
	I0307 19:00:43.734469   26384 cri.go:87] found id: ""
	I0307 19:00:43.734476   26384 logs.go:277] 1 containers: [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56]
	I0307 19:00:43.734531   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.739954   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 19:00:43.740015   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 19:00:43.766381   26384 cri.go:87] found id: ""
	I0307 19:00:43.766402   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.766408   26384 logs.go:279] No container was found matching "coredns"
	I0307 19:00:43.766413   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 19:00:43.766453   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 19:00:43.796840   26384 cri.go:87] found id: "1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
	I0307 19:00:43.796867   26384 cri.go:87] found id: ""
	I0307 19:00:43.796875   26384 logs.go:277] 1 containers: [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41]
	I0307 19:00:43.796929   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.801100   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 19:00:43.801154   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 19:00:43.830552   26384 cri.go:87] found id: ""
	I0307 19:00:43.830577   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.830584   26384 logs.go:279] No container was found matching "kube-proxy"
	I0307 19:00:43.830589   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 19:00:43.830637   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 19:00:43.867303   26384 cri.go:87] found id: "8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
	I0307 19:00:43.867324   26384 cri.go:87] found id: ""
	I0307 19:00:43.867331   26384 logs.go:277] 1 containers: [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06]
	I0307 19:00:43.867370   26384 ssh_runner.go:195] Run: which crictl
	I0307 19:00:43.871114   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 19:00:43.871164   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 19:00:43.904677   26384 cri.go:87] found id: ""
	I0307 19:00:43.904703   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.904709   26384 logs.go:279] No container was found matching "kindnet"
	I0307 19:00:43.904715   26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 19:00:43.904758   26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 19:00:43.944324   26384 cri.go:87] found id: ""
	I0307 19:00:43.944349   26384 logs.go:277] 0 containers: []
	W0307 19:00:43.944359   26384 logs.go:279] No container was found matching "storage-provisioner"
	I0307 19:00:43.944378   26384 logs.go:123] Gathering logs for containerd ...
	I0307 19:00:43.944395   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 19:00:44.011972   26384 logs.go:123] Gathering logs for kubelet ...
	I0307 19:00:44.012003   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 19:00:44.077224   26384 logs.go:123] Gathering logs for dmesg ...
	I0307 19:00:44.077258   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 19:00:44.091281   26384 logs.go:123] Gathering logs for describe nodes ...
	I0307 19:00:44.091305   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0307 19:00:44.158036   26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0307 19:00:44.158054   26384 logs.go:123] Gathering logs for etcd [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56] ...
	I0307 19:00:44.158065   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
	I0307 19:00:44.193518   26384 logs.go:123] Gathering logs for kube-scheduler [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41] ...
	I0307 19:00:44.193546   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
	I0307 19:00:44.281107   26384 logs.go:123] Gathering logs for kube-apiserver [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc] ...
	I0307 19:00:44.281138   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
	I0307 19:00:44.321328   26384 logs.go:123] Gathering logs for kube-controller-manager [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06] ...
	I0307 19:00:44.321353   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
	I0307 19:00:44.370028   26384 logs.go:123] Gathering logs for container status ...
	I0307 19:00:44.370058   26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0307 19:00:44.410088   26384 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0307 19:00:44.410135   26384 out.go:239] * 
	W0307 19:00:44.410302   26384 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0307 19:00:44.410323   26384 out.go:239] * 
	W0307 19:00:44.411225   26384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 19:00:44.414682   26384 out.go:177] 
	W0307 19:00:44.416349   26384 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.4
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0307 18:56:42.461556    7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0307 19:00:44.416447   26384 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0307 19:00:44.416516   26384 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0307 19:00:44.419274   26384 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8f74b327d355b       1f99cb6da9a82       About a minute ago   Exited              kube-controller-manager   15                  43e8a5d7973c1
	c6ea84a251b2a       aebe758cef4cd       About a minute ago   Exited              etcd                      17                  6336c6d20265b
	4c3f077f022bd       6cab9d1bed1be       About a minute ago   Exited              kube-apiserver            14                  a48cee835eb73
	1d5f6f3ec60ee       03fa22539fc1c       4 minutes ago        Running             kube-scheduler            3                   a639f60172172
	
	* 
	* ==> containerd <==
	* -- Journal begins at Tue 2023-03-07 18:47:44 UTC, ends at Tue 2023-03-07 19:00:45 UTC. --
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.352629573Z" level=warning msg="cleaning up after shim disconnected" id=c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56 namespace=k8s.io
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.352677227Z" level=info msg="cleaning up dead shim"
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.367324561Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:59:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8153 runtime=io.containerd.runc.v2\ntime=\"2023-03-07T18:59:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.367613601Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed"
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.368034875Z" level=error msg="Failed to pipe stdout of container \"c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56\"" error="reading from a closed fifo"
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.369103455Z" level=error msg="Failed to pipe stderr of container \"c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56\"" error="reading from a closed fifo"
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.374664547Z" level=error msg="StartContainer for \"c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: \"etcd\": executable file not found in $PATH: unknown"
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.502502972Z" level=info msg="RemoveContainer for \"f3ca8f12165168ac992c4913fc9ad7f88f5bbbd04ae7a7460359a1cdec15f0d2\""
	Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.511763714Z" level=info msg="RemoveContainer for \"f3ca8f12165168ac992c4913fc9ad7f88f5bbbd04ae7a7460359a1cdec15f0d2\" returns successfully"
	Mar 07 18:59:44 test-preload-203208 containerd[632]: time="2023-03-07T18:59:44.969321507Z" level=info msg="CreateContainer within sandbox \"43e8a5d7973c13866b592527eea80575bff1fcfbd65b345924df45a4e2137ade\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:15,}"
	Mar 07 18:59:44 test-preload-203208 containerd[632]: time="2023-03-07T18:59:44.992714494Z" level=info msg="CreateContainer within sandbox \"43e8a5d7973c13866b592527eea80575bff1fcfbd65b345924df45a4e2137ade\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:15,} returns container id \"8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06\""
	Mar 07 18:59:44 test-preload-203208 containerd[632]: time="2023-03-07T18:59:44.993581903Z" level=info msg="StartContainer for \"8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06\""
	Mar 07 18:59:45 test-preload-203208 containerd[632]: time="2023-03-07T18:59:45.330149372Z" level=info msg="StartContainer for \"8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06\" returns successfully"
	Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.925430491Z" level=info msg="shim disconnected" id=4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc
	Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.925556054Z" level=warning msg="cleaning up after shim disconnected" id=4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc namespace=k8s.io
	Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.925568402Z" level=info msg="cleaning up dead shim"
	Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.938637174Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:59:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8216 runtime=io.containerd.runc.v2\n"
	Mar 07 18:59:52 test-preload-203208 containerd[632]: time="2023-03-07T18:59:52.527968449Z" level=info msg="RemoveContainer for \"16b2d8e8669683f1b0ae8136038cd8f61eb5d0c9ba63472d90cc6dbc04d1edef\""
	Mar 07 18:59:52 test-preload-203208 containerd[632]: time="2023-03-07T18:59:52.534435720Z" level=info msg="RemoveContainer for \"16b2d8e8669683f1b0ae8136038cd8f61eb5d0c9ba63472d90cc6dbc04d1edef\" returns successfully"
	Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.935817898Z" level=info msg="shim disconnected" id=8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06
	Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.935938523Z" level=warning msg="cleaning up after shim disconnected" id=8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06 namespace=k8s.io
	Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.935952631Z" level=info msg="cleaning up dead shim"
	Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.951309595Z" level=warning msg="cleanup warnings time=\"2023-03-07T19:00:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8245 runtime=io.containerd.runc.v2\n"
	Mar 07 19:00:03 test-preload-203208 containerd[632]: time="2023-03-07T19:00:03.555081443Z" level=info msg="RemoveContainer for \"402b33a0acb4db523599b4b0c7a961bf445a627e88ad8730be8d0e408479454f\""
	Mar 07 19:00:03 test-preload-203208 containerd[632]: time="2023-03-07T19:00:03.560489063Z" level=info msg="RemoveContainer for \"402b33a0acb4db523599b4b0c7a961bf445a627e88ad8730be8d0e408479454f\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Mar 7 18:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069940] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.931123] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.246595] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147269] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.398341] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 7 18:48] systemd-fstab-generator[529]: Ignoring "noauto" for root device
	[  +2.816811] systemd-fstab-generator[561]: Ignoring "noauto" for root device
	[  +0.104429] systemd-fstab-generator[572]: Ignoring "noauto" for root device
	[  +0.137298] systemd-fstab-generator[585]: Ignoring "noauto" for root device
	[  +0.103680] systemd-fstab-generator[596]: Ignoring "noauto" for root device
	[  +0.237292] systemd-fstab-generator[623]: Ignoring "noauto" for root device
	[ +13.571443] systemd-fstab-generator[818]: Ignoring "noauto" for root device
	[Mar 7 18:52] systemd-fstab-generator[5678]: Ignoring "noauto" for root device
	[Mar 7 18:56] systemd-fstab-generator[7151]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56] <==
	* 
	* 
	* ==> kernel <==
	*  19:00:45 up 13 min,  0 users,  load average: 0.16, 0.27, 0.16
	Linux test-preload-203208 5.10.57 #1 SMP Fri Feb 24 23:00:41 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc] <==
	* I0307 18:59:31.397729       1 server.go:558] external host was not specified, using 192.168.39.212
	I0307 18:59:31.398685       1 server.go:158] Version: v1.24.4
	I0307 18:59:31.398771       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:59:31.880843       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0307 18:59:31.882238       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0307 18:59:31.882250       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0307 18:59:31.883546       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0307 18:59:31.883559       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0307 18:59:31.886102       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:32.881739       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:32.886774       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:33.882167       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:34.739525       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:35.553368       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:37.264279       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:38.341743       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:40.891684       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:41.954797       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:48.013588       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0307 18:59:48.078321       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0307 18:59:51.886026       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-controller-manager [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06] <==
	* 	vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x68e500?, {0x4d010e0, 0xc001023260}, 0x1, 0xc0000dc7e0)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00012e008?, 0xdf8475800, 0x0, 0x80?, 0xc0003aede0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x0?, 0xc000101860?, 0x0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
	
	goroutine 148 [syscall]:
	syscall.Syscall6(0xe8, 0xe, 0xc00108fc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
		/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
	k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x0?, {0xc00108fc14?, 0x0?, 0x0?}, 0x0?)
		vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0002000e0)
		vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000357220)
		vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
	created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
		vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
	
	* 
	* ==> kube-scheduler [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41] <==
	* E0307 18:59:53.386448       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.212:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 18:59:57.386066       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.39.212:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 18:59:57.386144       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.212:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:00.576353       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.212:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:00.576441       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.212:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:21.817814       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.39.212:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:21.818034       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.212:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:22.223130       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:22.223206       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:24.282317       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:24.282410       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:27.386083       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.39.212:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:27.386155       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.212:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:27.715416       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:27.715477       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:29.333542       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.39.212:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:29.333615       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.212:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:37.541286       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.39.212:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:37.541371       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.212:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:38.660515       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.212:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:38.660564       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.212:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:39.252011       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:39.252093       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	W0307 19:00:39.383693       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.212:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	E0307 19:00:39.383775       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.212:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-03-07 18:47:44 UTC, ends at Tue 2023-03-07 19:00:45 UTC. --
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.029488    7157 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.113039    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.213810    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.314784    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.415438    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.515835    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: W0307 19:00:44.562175    7157 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)test-preload-203208&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.562355    7157 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)test-preload-203208&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.617031    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.717396    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.818448    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.918996    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: I0307 19:00:44.966047    7157 scope.go:110] "RemoveContainer" containerID="c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: I0307 19:00:44.966085    7157 scope.go:110] "RemoveContainer" containerID="8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.966373    7157 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-test-preload-203208_kube-system(15302bf5fc252d83d35e6df26d8799f5)\"" pod="kube-system/kube-controller-manager-test-preload-203208" podUID=15302bf5fc252d83d35e6df26d8799f5
	Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.966370    7157 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-203208_kube-system(6bf068956ab0be326534b38dbab322fb)\"" pod="kube-system/etcd-test-preload-203208" podUID=6bf068956ab0be326534b38dbab322fb
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.019222    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.120032    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.221223    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.321971    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.422576    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.523238    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.623360    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.724364    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.825139    7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
	
	

-- /stdout --
** stderr ** 
	E0307 19:00:45.669001   26792 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-203208 -n test-preload-203208
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-203208 -n test-preload-203208: exit status 2 (221.514824ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-203208" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-203208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-203208
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-203208: (1.196438525s)
--- FAIL: TestPreload (1036.22s)
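The containerd journal above pins down the proximate failure: runc could not exec `etcd` inside its container (`executable file not found in $PATH`), so etcd, and with it kube-apiserver and kube-controller-manager, crash-looped until kubeadm's wait-control-plane phase timed out. As a minimal sketch (not part of the test harness), this is how that signature can be isolated from a captured journal excerpt; on a live VM one would pipe `journalctl -u containerd` instead of the embedded sample line:

```shell
# Minimal sketch: isolate the fatal runc exec error from a journal excerpt.
# The sample line is abbreviated from the containerd log above.
log='level=error msg="StartContainer failed" error="... exec: \"etcd\": executable file not found in $PATH: unknown"'
printf '%s\n' "$log" | grep -o 'executable file not found in \$PATH'
# prints: executable file not found in $PATH
```

Filtering for this string is quicker than reading the full journal, since the crash-loop repeats the same error on every restart attempt.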


Test pass (262/297)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 34.54
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.2/json-events 27.23
11 TestDownloadOnly/v1.26.2/preload-exists 0
15 TestDownloadOnly/v1.26.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.38
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
19 TestBinaryMirror 0.64
20 TestOffline 98.71
22 TestAddons/Setup 147.53
24 TestAddons/parallel/Registry 20.54
25 TestAddons/parallel/Ingress 25.97
26 TestAddons/parallel/MetricsServer 5.71
27 TestAddons/parallel/HelmTiller 18.68
29 TestAddons/parallel/CSI 57.98
30 TestAddons/parallel/Headlamp 13.07
31 TestAddons/parallel/CloudSpanner 5.44
34 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/StoppedEnableDisable 92.01
36 TestCertOptions 77.19
37 TestCertExpiration 254.63
39 TestForceSystemdFlag 82.27
40 TestForceSystemdEnv 58.13
41 TestKVMDriverInstallOrUpdate 13.92
45 TestErrorSpam/setup 55.66
46 TestErrorSpam/start 0.36
47 TestErrorSpam/status 0.73
48 TestErrorSpam/pause 1.39
49 TestErrorSpam/unpause 1.49
50 TestErrorSpam/stop 2.55
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 107.78
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 6.17
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 14.69
62 TestFunctional/serial/CacheCmd/cache/add_local 3.22
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
66 TestFunctional/serial/CacheCmd/cache/cache_reload 4.07
67 TestFunctional/serial/CacheCmd/cache/delete 0.1
68 TestFunctional/serial/MinikubeKubectlCmd 0.11
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
70 TestFunctional/serial/ExtraConfig 38.46
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.37
73 TestFunctional/serial/LogsFileCmd 1.3
75 TestFunctional/parallel/ConfigCmd 0.35
76 TestFunctional/parallel/DashboardCmd 24.66
77 TestFunctional/parallel/DryRun 0.27
78 TestFunctional/parallel/InternationalLanguage 0.14
79 TestFunctional/parallel/StatusCmd 0.81
83 TestFunctional/parallel/ServiceCmdConnect 7.57
84 TestFunctional/parallel/AddonsCmd 0.18
85 TestFunctional/parallel/PersistentVolumeClaim 52.08
87 TestFunctional/parallel/SSHCmd 0.41
88 TestFunctional/parallel/CpCmd 0.89
89 TestFunctional/parallel/MySQL 26.89
90 TestFunctional/parallel/FileSync 0.22
91 TestFunctional/parallel/CertSync 1.37
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
99 TestFunctional/parallel/License 0.28
100 TestFunctional/parallel/ServiceCmd/DeployApp 14.19
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
110 TestFunctional/parallel/ProfileCmd/profile_list 0.35
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
112 TestFunctional/parallel/MountCmd/any-port 13.51
113 TestFunctional/parallel/Version/short 0.07
114 TestFunctional/parallel/Version/components 1.19
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.47
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.47
120 TestFunctional/parallel/ImageCommands/Setup 1.79
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.05
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.87
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.24
124 TestFunctional/parallel/ServiceCmd/List 0.3
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
127 TestFunctional/parallel/ServiceCmd/Format 0.35
128 TestFunctional/parallel/MountCmd/specific-port 1.92
129 TestFunctional/parallel/ServiceCmd/URL 0.32
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.3
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.39
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.29
137 TestFunctional/delete_addon-resizer_images 0.16
138 TestFunctional/delete_my-image_image 0.06
139 TestFunctional/delete_minikube_cached_images 0.06
143 TestIngressAddonLegacy/StartLegacyK8sCluster 100.44
145 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.81
146 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.4
147 TestIngressAddonLegacy/serial/ValidateIngressAddons 30.8
150 TestJSONOutput/start/Command 71.45
151 TestJSONOutput/start/Audit 0
153 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/pause/Command 0.61
157 TestJSONOutput/pause/Audit 0
159 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/unpause/Command 0.58
163 TestJSONOutput/unpause/Audit 0
165 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/stop/Command 7.09
169 TestJSONOutput/stop/Audit 0
171 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
173 TestErrorJSONOutput 0.43
178 TestMainNoArgs 0.05
179 TestMinikubeProfile 112.96
182 TestMountStart/serial/StartWithMountFirst 28.73
183 TestMountStart/serial/VerifyMountFirst 0.38
184 TestMountStart/serial/StartWithMountSecond 32.32
185 TestMountStart/serial/VerifyMountSecond 0.54
186 TestMountStart/serial/DeleteFirst 0.84
187 TestMountStart/serial/VerifyMountPostDelete 0.38
188 TestMountStart/serial/Stop 1.12
189 TestMountStart/serial/RestartStopped 23.31
190 TestMountStart/serial/VerifyMountPostStop 0.36
193 TestMultiNode/serial/FreshStart2Nodes 152.1
194 TestMultiNode/serial/DeployApp2Nodes 5.64
195 TestMultiNode/serial/PingHostFrom2Pods 0.85
196 TestMultiNode/serial/AddNode 71.06
197 TestMultiNode/serial/ProfileList 0.25
198 TestMultiNode/serial/CopyFile 7.32
199 TestMultiNode/serial/StopNode 2.11
200 TestMultiNode/serial/StartAfterStop 119.81
201 TestMultiNode/serial/RestartKeepsNodes 548.01
202 TestMultiNode/serial/DeleteNode 2.1
203 TestMultiNode/serial/StopMultiNode 183.43
204 TestMultiNode/serial/RestartMultiNode 237.96
205 TestMultiNode/serial/ValidateNameConflict 54.83
212 TestScheduledStopUnix 129.87
216 TestRunningBinaryUpgrade 242.01
218 TestKubernetesUpgrade 239.91
221 TestPause/serial/Start 81.62
223 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
224 TestNoKubernetes/serial/StartWithK8s 134.91
225 TestPause/serial/SecondStartNoReconfiguration 26.62
226 TestStoppedBinaryUpgrade/Setup 2.57
227 TestStoppedBinaryUpgrade/Upgrade 265.06
228 TestPause/serial/Pause 1.02
229 TestPause/serial/VerifyStatus 0.26
230 TestPause/serial/Unpause 0.66
231 TestPause/serial/PauseAgain 0.85
232 TestPause/serial/DeletePaused 1.12
233 TestPause/serial/VerifyDeletedResources 0.42
234 TestNoKubernetes/serial/StartWithStopK8s 38.09
235 TestNoKubernetes/serial/Start 29.27
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
237 TestNoKubernetes/serial/ProfileList 1.33
238 TestNoKubernetes/serial/Stop 1.29
239 TestNoKubernetes/serial/StartNoArgs 49.63
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
255 TestNetworkPlugins/group/false 3.45
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
261 TestStartStop/group/old-k8s-version/serial/FirstStart 153.06
263 TestStartStop/group/no-preload/serial/FirstStart 151.47
265 TestStartStop/group/embed-certs/serial/FirstStart 126.12
267 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 108.64
268 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
269 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
270 TestStartStop/group/old-k8s-version/serial/Stop 102.21
271 TestStartStop/group/no-preload/serial/DeployApp 11.38
272 TestStartStop/group/embed-certs/serial/DeployApp 9.45
273 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
274 TestStartStop/group/embed-certs/serial/Stop 92.26
275 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
276 TestStartStop/group/no-preload/serial/Stop 92.04
277 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.42
278 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
279 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.93
280 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
281 TestStartStop/group/old-k8s-version/serial/SecondStart 397.14
282 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
283 TestStartStop/group/embed-certs/serial/SecondStart 443.98
284 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
285 TestStartStop/group/no-preload/serial/SecondStart 607.3
286 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
287 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 716.15
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
291 TestStartStop/group/old-k8s-version/serial/Pause 2.49
293 TestStartStop/group/newest-cni/serial/FirstStart 70.45
294 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.02
295 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
296 TestStartStop/group/newest-cni/serial/DeployApp 0
297 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
298 TestStartStop/group/newest-cni/serial/Stop 3.13
299 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
300 TestStartStop/group/embed-certs/serial/Pause 2.75
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
302 TestStartStop/group/newest-cni/serial/SecondStart 75.23
303 TestNetworkPlugins/group/auto/Start 94.64
304 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
307 TestStartStop/group/newest-cni/serial/Pause 2.54
308 TestNetworkPlugins/group/kindnet/Start 80.44
309 TestNetworkPlugins/group/auto/KubeletFlags 0.23
310 TestNetworkPlugins/group/auto/NetCatPod 11.35
311 TestNetworkPlugins/group/auto/DNS 0.16
312 TestNetworkPlugins/group/auto/Localhost 0.13
313 TestNetworkPlugins/group/auto/HairPin 0.13
314 TestNetworkPlugins/group/calico/Start 101.32
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
318 TestStartStop/group/no-preload/serial/Pause 2.79
319 TestNetworkPlugins/group/custom-flannel/Start 98.63
320 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
322 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
323 TestNetworkPlugins/group/kindnet/DNS 0.17
324 TestNetworkPlugins/group/kindnet/Localhost 0.13
325 TestNetworkPlugins/group/kindnet/HairPin 0.14
326 TestNetworkPlugins/group/enable-default-cni/Start 109.17
327 TestNetworkPlugins/group/calico/ControllerPod 5.02
328 TestNetworkPlugins/group/calico/KubeletFlags 0.46
329 TestNetworkPlugins/group/calico/NetCatPod 12.54
330 TestNetworkPlugins/group/calico/DNS 0.18
331 TestNetworkPlugins/group/calico/Localhost 0.15
332 TestNetworkPlugins/group/calico/HairPin 0.15
333 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
334 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.35
335 TestNetworkPlugins/group/custom-flannel/DNS 0.19
336 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
337 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
338 TestNetworkPlugins/group/flannel/Start 99.34
339 TestNetworkPlugins/group/bridge/Start 93.35
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.26
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.45
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.45
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.34
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
349 TestNetworkPlugins/group/flannel/ControllerPod 5.02
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
351 TestNetworkPlugins/group/flannel/NetCatPod 10.29
352 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
353 TestNetworkPlugins/group/bridge/NetCatPod 10.36
354 TestNetworkPlugins/group/flannel/DNS 0.17
355 TestNetworkPlugins/group/flannel/Localhost 0.14
356 TestNetworkPlugins/group/flannel/HairPin 0.13
357 TestNetworkPlugins/group/bridge/DNS 0.17
358 TestNetworkPlugins/group/bridge/Localhost 0.13
359 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.16.0/json-events (34.54s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-240312 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-240312 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (34.544500155s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (34.54s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-240312
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-240312: exit status 85 (63.992077ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-240312 | jenkins | v1.29.0 | 07 Mar 23 18:01 UTC |          |
	|         | -p download-only-240312        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 18:01:54
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:01:54.641936   11118 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:01:54.642163   11118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:54.642173   11118 out.go:309] Setting ErrFile to fd 2...
	I0307 18:01:54.642179   11118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:54.642274   11118 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	W0307 18:01:54.642379   11118 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15985-4052/.minikube/config/config.json: open /home/jenkins/minikube-integration/15985-4052/.minikube/config/config.json: no such file or directory
	I0307 18:01:54.642930   11118 out.go:303] Setting JSON to true
	I0307 18:01:54.643672   11118 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2663,"bootTime":1678209452,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:01:54.643739   11118 start.go:135] virtualization: kvm guest
	I0307 18:01:54.646551   11118 out.go:97] [download-only-240312] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0307 18:01:54.646640   11118 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 18:01:54.648378   11118 out.go:169] MINIKUBE_LOCATION=15985
	I0307 18:01:54.646685   11118 notify.go:220] Checking for updates...
	I0307 18:01:54.651543   11118 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:01:54.653153   11118 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:01:54.654839   11118 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	I0307 18:01:54.656424   11118 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0307 18:01:54.659243   11118 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:01:54.659411   11118 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:01:54.772737   11118 out.go:97] Using the kvm2 driver based on user configuration
	I0307 18:01:54.772772   11118 start.go:296] selected driver: kvm2
	I0307 18:01:54.772778   11118 start.go:857] validating driver "kvm2" against <nil>
	I0307 18:01:54.773051   11118 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:01:54.773154   11118 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4052/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0307 18:01:54.786988   11118 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0307 18:01:54.787041   11118 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0307 18:01:54.787485   11118 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0307 18:01:54.787631   11118 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 18:01:54.787659   11118 cni.go:84] Creating CNI manager for ""
	I0307 18:01:54.787675   11118 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0307 18:01:54.787680   11118 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 18:01:54.787689   11118 start_flags.go:319] config:
	{Name:download-only-240312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-240312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:01:54.787857   11118 iso.go:125] acquiring lock: {Name:mkd51cb229a70df75d89beefefdcafed4c3dd9f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:01:54.789877   11118 out.go:97] Downloading VM boot image ...
	I0307 18:01:54.789933   11118 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/iso/amd64/minikube-v1.29.0-1677261626-15923-amd64.iso
	I0307 18:02:06.799552   11118 out.go:97] Starting control plane node download-only-240312 in cluster download-only-240312
	I0307 18:02:06.799575   11118 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0307 18:02:06.952404   11118 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0307 18:02:06.952452   11118 cache.go:57] Caching tarball of preloaded images
	I0307 18:02:06.952606   11118 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0307 18:02:06.954849   11118 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0307 18:02:06.954874   11118 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:02:07.114843   11118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-240312"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.26.2/json-events (27.23s)

=== RUN   TestDownloadOnly/v1.26.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-240312 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-240312 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (27.23252237s)
--- PASS: TestDownloadOnly/v1.26.2/json-events (27.23s)

TestDownloadOnly/v1.26.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.2/preload-exists
--- PASS: TestDownloadOnly/v1.26.2/preload-exists (0.00s)

TestDownloadOnly/v1.26.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.26.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-240312
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-240312: exit status 85 (64.03486ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-240312 | jenkins | v1.29.0 | 07 Mar 23 18:01 UTC |          |
	|         | -p download-only-240312        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-240312 | jenkins | v1.29.0 | 07 Mar 23 18:02 UTC |          |
	|         | -p download-only-240312        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 18:02:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:02:29.250609   11154 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:02:29.250708   11154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:02:29.250716   11154 out.go:309] Setting ErrFile to fd 2...
	I0307 18:02:29.250720   11154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:02:29.250821   11154 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	W0307 18:02:29.251231   11154 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15985-4052/.minikube/config/config.json: open /home/jenkins/minikube-integration/15985-4052/.minikube/config/config.json: no such file or directory
	I0307 18:02:29.252052   11154 out.go:303] Setting JSON to true
	I0307 18:02:29.253096   11154 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2697,"bootTime":1678209452,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:02:29.253161   11154 start.go:135] virtualization: kvm guest
	I0307 18:02:29.255252   11154 out.go:97] [download-only-240312] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:02:29.256802   11154 out.go:169] MINIKUBE_LOCATION=15985
	I0307 18:02:29.255399   11154 notify.go:220] Checking for updates...
	I0307 18:02:29.259623   11154 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:02:29.261126   11154 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:02:29.262446   11154 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	I0307 18:02:29.263818   11154 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0307 18:02:29.266670   11154 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 18:02:29.267081   11154 config.go:182] Loaded profile config "download-only-240312": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0307 18:02:29.267139   11154 start.go:765] api.Load failed for download-only-240312: filestore "download-only-240312": Docker machine "download-only-240312" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0307 18:02:29.267201   11154 driver.go:365] Setting default libvirt URI to qemu:///system
	W0307 18:02:29.267240   11154 start.go:765] api.Load failed for download-only-240312: filestore "download-only-240312": Docker machine "download-only-240312" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0307 18:02:29.298994   11154 out.go:97] Using the kvm2 driver based on existing profile
	I0307 18:02:29.299034   11154 start.go:296] selected driver: kvm2
	I0307 18:02:29.299041   11154 start.go:857] validating driver "kvm2" against &{Name:download-only-240312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-240312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:02:29.299392   11154 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:02:29.299468   11154 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4052/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0307 18:02:29.313855   11154 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0307 18:02:29.314509   11154 cni.go:84] Creating CNI manager for ""
	I0307 18:02:29.314528   11154 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0307 18:02:29.314537   11154 start_flags.go:319] config:
	{Name:download-only-240312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:download-only-240312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:02:29.314765   11154 iso.go:125] acquiring lock: {Name:mkd51cb229a70df75d89beefefdcafed4c3dd9f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:02:29.316773   11154 out.go:97] Starting control plane node download-only-240312 in cluster download-only-240312
	I0307 18:02:29.316791   11154 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime containerd
	I0307 18:02:29.961875   11154 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4
	I0307 18:02:29.961944   11154 cache.go:57] Caching tarball of preloaded images
	I0307 18:02:29.962097   11154 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime containerd
	I0307 18:02:30.010026   11154 out.go:97] Downloading Kubernetes v1.26.2 preload ...
	I0307 18:02:30.010093   11154 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:02:30.703393   11154 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:9732ab8cab6f650b8db71c83489fbd15 -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4
	I0307 18:02:51.901157   11154 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:02:51.901251   11154 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4 ...
	I0307 18:02:53.206685   11154 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on containerd
	I0307 18:02:53.206861   11154 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/download-only-240312/config.json ...
	I0307 18:02:53.207109   11154 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime containerd
	I0307 18:02:53.207330   11154 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/linux/amd64/v1.26.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-240312"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.2/LogsDuration (0.06s)
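The preload download above fetches the tarball with a checksum spec appended to the URL query (`?checksum=md5:<hex>`), which the downloader verifies after saving. A minimal sketch of that verify step, assuming a hypothetical `verifyChecksum` helper (this is not minikube's actual `download.go` code):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"net/url"
	"strings"
)

// verifyChecksum checks data against an "md5:<hex>" checksum spec, the
// query-string convention seen in the preload URL above. Illustrative only.
func verifyChecksum(spec string, data []byte) (bool, error) {
	algo, want, ok := strings.Cut(spec, ":")
	if !ok || algo != "md5" {
		return false, fmt.Errorf("unsupported checksum spec %q", spec)
	}
	sum := md5.Sum(data)
	return hex.EncodeToString(sum[:]) == strings.ToLower(want), nil
}

func main() {
	data := []byte("hello")
	sum := md5.Sum(data)
	// Hypothetical URL carrying the checksum the same way as the log above.
	raw := "https://example.test/preload.tar.lz4?checksum=md5:" + hex.EncodeToString(sum[:])
	u, _ := url.Parse(raw)
	ok, err := verifyChecksum(u.Query().Get("checksum"), data)
	fmt.Println(ok, err) // true <nil>
}
```

The checksum lives in the URL rather than a sidecar file, so a cached tarball can be re-verified later from the same string, which is what the `saving checksum` / `verifying checksum` log lines record.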

TestDownloadOnly/DeleteAll (0.38s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.38s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-240312
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-026053 --alsologtostderr --binary-mirror http://127.0.0.1:35251 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-026053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-026053
--- PASS: TestBinaryMirror (0.64s)

TestOffline (98.71s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-042499 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-042499 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m37.602887389s)
helpers_test.go:175: Cleaning up "offline-containerd-042499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-042499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-042499: (1.10771281s)
--- PASS: TestOffline (98.71s)

TestAddons/Setup (147.53s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-628397 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-628397 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.528847462s)
--- PASS: TestAddons/Setup (147.53s)

TestAddons/parallel/Registry (20.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 16.021068ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5db8x" [e269f8cc-291b-49f4-89f6-c666acf8587e] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017697958s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k4jvj" [7a6e1351-eae3-40c5-b8af-7ac5dbc1baae] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009820603s
addons_test.go:305: (dbg) Run:  kubectl --context addons-628397 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-628397 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-628397 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.895997423s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 ip
2023/03/07 18:05:45 [DEBUG] GET http://192.168.39.149:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.54s)

TestAddons/parallel/Ingress (25.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-628397 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context addons-628397 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.664750228s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-628397 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Non-zero exit: kubectl --context addons-628397 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (811.249607ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.101.137.214:443: connect: connection refused

** /stderr **
addons_test.go:197: (dbg) Run:  kubectl --context addons-628397 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-628397 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4d9f2fc5-8827-4dd1-8334-aa9a03f4795e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4d9f2fc5-8827-4dd1-8334-aa9a03f4795e] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.014206971s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-628397 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.149
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-628397 addons disable ingress-dns --alsologtostderr -v=1: (1.234360583s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-628397 addons disable ingress --alsologtostderr -v=1: (7.583773494s)
--- PASS: TestAddons/parallel/Ingress (25.97s)

TestAddons/parallel/MetricsServer (5.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 16.128165ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-cd8kg" [0fc946b8-cb38-4370-8529-123ec58945fa] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013996155s
addons_test.go:380: (dbg) Run:  kubectl --context addons-628397 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

TestAddons/parallel/HelmTiller (18.68s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.445316ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-qn5l9" [481704e4-95a0-433e-a2ce-2064d50d0ed0] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008322147s
addons_test.go:438: (dbg) Run:  kubectl --context addons-628397 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-628397 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.078769903s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.68s)

TestAddons/parallel/CSI (57.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 20.226723ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-628397 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-628397 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2db0cdf7-a5da-4c18-809c-d35e94633277] Pending
helpers_test.go:344: "task-pv-pod" [2db0cdf7-a5da-4c18-809c-d35e94633277] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2db0cdf7-a5da-4c18-809c-d35e94633277] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.011285968s
addons_test.go:549: (dbg) Run:  kubectl --context addons-628397 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-628397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-628397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-628397 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-628397 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-628397 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-628397 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4c78bee0-4649-4379-8030-0e971fea6b41] Pending
helpers_test.go:344: "task-pv-pod-restore" [4c78bee0-4649-4379-8030-0e971fea6b41] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4c78bee0-4649-4379-8030-0e971fea6b41] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.011116188s
addons_test.go:591: (dbg) Run:  kubectl --context addons-628397 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-628397 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-628397 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-628397 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.55007908s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-628397 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.98s)

TestAddons/parallel/Headlamp (13.07s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-628397 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-628397 --alsologtostderr -v=1: (1.060795045s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-wfxtj" [36d81a2e-c302-4071-8201-fd310f18bfc6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-wfxtj" [36d81a2e-c302-4071-8201-fd310f18bfc6] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.007145524s
--- PASS: TestAddons/parallel/Headlamp (13.07s)

TestAddons/parallel/CloudSpanner (5.44s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-58d646969f-sdz72" [3f301ea1-0eda-4022-bbc8-36de8ee22f83] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006101436s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-628397
--- PASS: TestAddons/parallel/CloudSpanner (5.44s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-628397 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-628397 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (92.01s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-628397
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-628397: (1m31.833440995s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-628397
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-628397
--- PASS: TestAddons/StoppedEnableDisable (92.01s)

TestCertOptions (77.19s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-367374 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-367374 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m15.390828229s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-367374 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-367374 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-367374 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-367374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-367374
E0307 19:10:08.837821   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-367374: (1.221723931s)
--- PASS: TestCertOptions (77.19s)
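What this test asserts is that the `--apiserver-ips`/`--apiserver-names` flags end up in the apiserver certificate's Subject Alternative Names. A minimal local sketch of the same check, using a throwaway self-signed cert instead of minikube's (the `/tmp` paths and subject are illustrative; `-addext` needs OpenSSL 1.1.1+):

```shell
# Generate a throwaway cert carrying the same SANs the test requests.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/apiserver.key \
  -out /tmp/apiserver.crt -days 1 -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# cert_options_test.go decodes the cert the same way and checks for the names:
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```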

TestCertExpiration (254.63s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-949300 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0307 19:06:58.626810   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-949300 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m7.370476977s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-949300 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-949300 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (6.139410716s)
helpers_test.go:175: Cleaning up "cert-expiration-949300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-949300
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-949300: (1.11705395s)
--- PASS: TestCertExpiration (254.63s)
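Starting with `--cert-expiration=3m` and then restarting with `8760h` exercises regeneration of short-lived certs. The expiry window itself can be probed with `openssl x509 -checkend`; a standalone sketch on a throwaway cert (hypothetical `/tmp` paths, not minikube's files):

```shell
# Issue a cert valid for 1 day, then ask whether it survives a 3-day window.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/exp.key \
  -out /tmp/exp.crt -days 1 -subj "/CN=minikube"

# -checkend N exits 0 only if the cert is still valid N seconds from now.
openssl x509 -checkend 0 -in /tmp/exp.crt && echo "valid now"
openssl x509 -checkend 259200 -in /tmp/exp.crt || echo "expires within 3 days"
```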

TestForceSystemdFlag (82.27s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-400557 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0307 19:07:15.578647   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-400557 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m20.920458564s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-400557 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-400557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-400557
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-400557: (1.133815417s)
--- PASS: TestForceSystemdFlag (82.27s)

TestForceSystemdEnv (58.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-371474 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-371474 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (56.642050027s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-371474 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-371474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-371474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-371474: (1.241744874s)
--- PASS: TestForceSystemdEnv (58.13s)

TestKVMDriverInstallOrUpdate (13.92s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (13.92s)

TestErrorSpam/setup (55.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-943150 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-943150 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-943150 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-943150 --driver=kvm2  --container-runtime=containerd: (55.659826204s)
--- PASS: TestErrorSpam/setup (55.66s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.73s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 status
--- PASS: TestErrorSpam/status (0.73s)

TestErrorSpam/pause (1.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 pause
--- PASS: TestErrorSpam/pause (1.39s)

TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (2.55s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 stop: (2.405857364s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-943150 --log_dir /tmp/nospam-943150 stop
--- PASS: TestErrorSpam/stop (2.55s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/test/nested/copy/11106/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (107.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244351 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0307 18:10:25.776233   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:25.782025   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:25.792265   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:25.812497   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:25.852750   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:25.933080   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:26.093523   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:26.414124   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:27.055016   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:28.335482   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:30.895936   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:36.016745   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:10:46.257003   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-244351 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m47.776945484s)
--- PASS: TestFunctional/serial/StartWithProxy (107.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.17s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244351 --alsologtostderr -v=8
E0307 18:11:06.737436   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-244351 --alsologtostderr -v=8: (6.17411575s)
functional_test.go:658: soft start took 6.17467285s for "functional-244351" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.17s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-244351 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (14.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 cache add k8s.gcr.io/pause:3.1: (4.803680164s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 cache add k8s.gcr.io/pause:3.3: (4.991347939s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 cache add k8s.gcr.io/pause:latest: (4.892743708s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (14.69s)

TestFunctional/serial/CacheCmd/cache/add_local (3.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-244351 /tmp/TestFunctionalserialCacheCmdcacheadd_local2901331054/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cache add minikube-local-cache-test:functional-244351
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 cache add minikube-local-cache-test:functional-244351: (2.862899329s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cache delete minikube-local-cache-test:functional-244351
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-244351
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.22s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (214.789339ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 cache reload: (3.408970195s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.07s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 kubectl -- --context functional-244351 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-244351 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244351 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 18:11:47.698889   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-244351 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.457584215s)
functional_test.go:756: restart took 38.457681674s for "functional-244351" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.46s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-244351 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
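The phase/status lines above boil down to two assertions per control-plane pod in the JSON that `kubectl get po ... -o=json` returns: phase is `Running` and the `Ready` condition is `True`. Emulated here on a canned one-pod sample (hypothetical file path; no cluster assumed), since the real test iterates the same fields in Go:

```shell
# Canned stand-in for `kubectl get po -l tier=control-plane -o json` output.
cat > /tmp/pods.json <<'EOF'
{"items":[{"metadata":{"name":"etcd"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}
EOF

# The test's two assertions, per pod: phase is Running, Ready condition is True.
grep -q '"phase":"Running"' /tmp/pods.json && echo "etcd phase: Running"
grep -q '"type":"Ready","status":"True"' /tmp/pods.json && echo "etcd status: Ready"
```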

TestFunctional/serial/LogsCmd (1.37s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 logs: (1.366430996s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 logs --file /tmp/TestFunctionalserialLogsFileCmd1174193727/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 logs --file /tmp/TestFunctionalserialLogsFileCmd1174193727/001/logs.txt: (1.298770691s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 config get cpus: exit status 14 (59.001334ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 config get cpus: exit status 14 (56.366059ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
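The `exit status 14` above is what `config get` returns when the key is absent from the profile config, which is exactly what the test expects before `set` and after `unset`. A tiny stand-in for that round trip (hypothetical file and function names, not minikube code):

```shell
# Hypothetical stand-in for `minikube config get cpus`: missing key -> exit 14.
CONFIG=/tmp/fake-minikube-config.json
echo '{}' > "$CONFIG"

get_cpus() {
  grep -o '"cpus": *[0-9]*' "$CONFIG" || return 14
}

get_cpus; echo "unset -> exit $?"
echo '{"cpus": 2}' > "$CONFIG"      # mimics `config set cpus 2`
get_cpus; echo "set   -> exit $?"
```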

TestFunctional/parallel/DashboardCmd (24.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244351 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244351 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 17498: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (24.66s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (131.958919ms)
-- stdout --
	* [functional-244351] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0307 18:12:34.069279   16905 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:12:34.069464   16905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:12:34.069474   16905 out.go:309] Setting ErrFile to fd 2...
	I0307 18:12:34.069480   16905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:12:34.069596   16905 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	I0307 18:12:34.070155   16905 out.go:303] Setting JSON to false
	I0307 18:12:34.071094   16905 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3302,"bootTime":1678209452,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:12:34.071159   16905 start.go:135] virtualization: kvm guest
	I0307 18:12:34.073777   16905 out.go:177] * [functional-244351] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 18:12:34.075432   16905 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:12:34.075403   16905 notify.go:220] Checking for updates...
	I0307 18:12:34.077009   16905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:12:34.078610   16905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:12:34.080056   16905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	I0307 18:12:34.081713   16905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:12:34.083257   16905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:12:34.085101   16905 config.go:182] Loaded profile config "functional-244351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0307 18:12:34.085451   16905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:12:34.085491   16905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:12:34.100163   16905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33293
	I0307 18:12:34.100541   16905 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:12:34.101035   16905 main.go:141] libmachine: Using API Version  1
	I0307 18:12:34.101056   16905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:12:34.101408   16905 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:12:34.101574   16905 main.go:141] libmachine: (functional-244351) Calling .DriverName
	I0307 18:12:34.101758   16905 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:12:34.102069   16905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:12:34.102108   16905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:12:34.115829   16905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0307 18:12:34.116173   16905 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:12:34.116623   16905 main.go:141] libmachine: Using API Version  1
	I0307 18:12:34.116647   16905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:12:34.116916   16905 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:12:34.117105   16905 main.go:141] libmachine: (functional-244351) Calling .DriverName
	I0307 18:12:34.149666   16905 out.go:177] * Using the kvm2 driver based on existing profile
	I0307 18:12:34.151122   16905 start.go:296] selected driver: kvm2
	I0307 18:12:34.151139   16905 start.go:857] validating driver "kvm2" against &{Name:functional-244351 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.26.2 ClusterName:functional-244351 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.145 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:12:34.151277   16905 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:12:34.153424   16905 out.go:177] 
	W0307 18:12:34.154935   16905 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 18:12:34.156421   16905 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244351 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (141.561235ms)

-- stdout --
	* [functional-244351] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0307 18:12:34.341119   16985 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:12:34.341227   16985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:12:34.341235   16985 out.go:309] Setting ErrFile to fd 2...
	I0307 18:12:34.341239   16985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:12:34.341378   16985 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	I0307 18:12:34.341879   16985 out.go:303] Setting JSON to false
	I0307 18:12:34.343030   16985 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3302,"bootTime":1678209452,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 18:12:34.343151   16985 start.go:135] virtualization: kvm guest
	I0307 18:12:34.346019   16985 out.go:177] * [functional-244351] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0307 18:12:34.347938   16985 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 18:12:34.349235   16985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:12:34.347952   16985 notify.go:220] Checking for updates...
	I0307 18:12:34.351775   16985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 18:12:34.353375   16985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	I0307 18:12:34.354819   16985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 18:12:34.356295   16985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:12:34.357886   16985 config.go:182] Loaded profile config "functional-244351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0307 18:12:34.358281   16985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:12:34.358335   16985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:12:34.374394   16985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I0307 18:12:34.374714   16985 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:12:34.375231   16985 main.go:141] libmachine: Using API Version  1
	I0307 18:12:34.375255   16985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:12:34.375560   16985 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:12:34.375736   16985 main.go:141] libmachine: (functional-244351) Calling .DriverName
	I0307 18:12:34.375902   16985 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 18:12:34.376163   16985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:12:34.376194   16985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:12:34.393981   16985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0307 18:12:34.394353   16985 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:12:34.394829   16985 main.go:141] libmachine: Using API Version  1
	I0307 18:12:34.394853   16985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:12:34.395129   16985 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:12:34.395285   16985 main.go:141] libmachine: (functional-244351) Calling .DriverName
	I0307 18:12:34.429130   16985 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0307 18:12:34.430679   16985 start.go:296] selected driver: kvm2
	I0307 18:12:34.430697   16985 start.go:857] validating driver "kvm2" against &{Name:functional-244351 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.26.2 ClusterName:functional-244351 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.145 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 18:12:34.430813   16985 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:12:34.433365   16985 out.go:177] 
	W0307 18:12:34.434973   16985 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 18:12:34.436460   16985 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

TestFunctional/parallel/ServiceCmdConnect (7.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-244351 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-244351 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-t5f5g" [8c8be380-e669-45d9-ac9a-d1d8190e1fb5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-t5f5g" [8c8be380-e669-45d9-ac9a-d1d8190e1fb5] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.017476382s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.50.145:31821
functional_test.go:1673: http://192.168.50.145:31821: success! body:

Hostname: hello-node-connect-5cf7cc858f-t5f5g

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.145:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.145:31821
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (52.08s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [50e27039-f345-4763-af19-ea2f435b1892] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.021075197s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-244351 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-244351 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-244351 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-244351 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-244351 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0d71fe8b-e244-4003-8c8f-678e0666aa79] Pending
helpers_test.go:344: "sp-pod" [0d71fe8b-e244-4003-8c8f-678e0666aa79] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0d71fe8b-e244-4003-8c8f-678e0666aa79] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.012769939s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-244351 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-244351 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-244351 delete -f testdata/storage-provisioner/pod.yaml: (2.061823323s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-244351 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c58f7087-0e05-44cd-8517-c74c1181eb74] Pending
helpers_test.go:344: "sp-pod" [c58f7087-0e05-44cd-8517-c74c1181eb74] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c58f7087-0e05-44cd-8517-c74c1181eb74] Running
2023/03/07 18:13:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.00736231s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-244351 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.08s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (0.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh -n functional-244351 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 cp functional-244351:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4131480119/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh -n functional-244351 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.89s)

TestFunctional/parallel/MySQL (26.89s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-244351 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-9c962" [0979b09e-87bd-4b1a-a11e-096ac8cb8de7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-9c962" [0979b09e-87bd-4b1a-a11e-096ac8cb8de7] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.009455823s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-244351 exec mysql-888f84dd9-9c962 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-244351 exec mysql-888f84dd9-9c962 -- mysql -ppassword -e "show databases;": exit status 1 (294.412845ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-244351 exec mysql-888f84dd9-9c962 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-244351 exec mysql-888f84dd9-9c962 -- mysql -ppassword -e "show databases;": exit status 1 (201.257907ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-244351 exec mysql-888f84dd9-9c962 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-244351 exec mysql-888f84dd9-9c962 -- mysql -ppassword -e "show databases;": exit status 1 (193.634462ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-244351 exec mysql-888f84dd9-9c962 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.89s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/11106/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo cat /etc/test/nested/copy/11106/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/11106.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo cat /etc/ssl/certs/11106.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/11106.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo cat /usr/share/ca-certificates/11106.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/111062.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo cat /etc/ssl/certs/111062.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/111062.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo cat /usr/share/ca-certificates/111062.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-244351 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 ssh "sudo systemctl is-active docker": exit status 1 (205.785236ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 ssh "sudo systemctl is-active crio": exit status 1 (210.131599ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
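The assertions above treat "inactive" on stdout plus a non-zero exit as the expected result: `systemctl is-active` exits with status 3 for an inactive unit. A self-contained sketch of that check, with a stub command standing in for `minikube ssh "sudo systemctl is-active docker"` so it runs anywhere:

```python
import subprocess

# Stub for `systemctl is-active <unit>` on an inactive unit: it prints the
# state on stdout and exits 3 (non-zero), which is what the test expects.
proc = subprocess.run(["sh", "-c", "echo inactive; exit 3"],
                      capture_output=True, text=True)
runtime_disabled = proc.stdout.strip() == "inactive" and proc.returncode != 0
print("runtime disabled:", runtime_disabled)
```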

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-244351 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-244351 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-ppjjw" [2f381982-9088-486c-ba49-9e564f93c3bb] Pending
helpers_test.go:344: "hello-node-6fddd6858d-ppjjw" [2f381982-9088-486c-ba49-9e564f93c3bb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-ppjjw" [2f381982-9088-486c-ba49-9e564f93c3bb] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.012315183s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "302.092236ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "49.398157ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "347.054959ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "50.136153ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (13.51s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244351 /tmp/TestFunctionalparallelMountCmdany-port2594393162/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1678212737226260595" to /tmp/TestFunctionalparallelMountCmdany-port2594393162/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1678212737226260595" to /tmp/TestFunctionalparallelMountCmdany-port2594393162/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1678212737226260595" to /tmp/TestFunctionalparallelMountCmdany-port2594393162/001/test-1678212737226260595
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.676401ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 18:12 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 18:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 18:12 test-1678212737226260595
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh cat /mount-9p/test-1678212737226260595
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-244351 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4e5b499c-d254-43a2-9c6e-0ad0fba21804] Pending
helpers_test.go:344: "busybox-mount" [4e5b499c-d254-43a2-9c6e-0ad0fba21804] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4e5b499c-d254-43a2-9c6e-0ad0fba21804] Running
helpers_test.go:344: "busybox-mount" [4e5b499c-d254-43a2-9c6e-0ad0fba21804] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4e5b499c-d254-43a2-9c6e-0ad0fba21804] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.009770441s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-244351 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244351 /tmp/TestFunctionalparallelMountCmdany-port2594393162/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.51s)
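The findmnt check above fails once while the 9p mount is still coming up and then succeeds on retry. A minimal retry helper for a probe like `minikube ssh "findmnt -T /mount-9p | grep 9p"` (the probes are stubbed with `true`/`false` here so the sketch is self-contained):

```python
import subprocess
import time

def wait_for(cmd, attempts=3, delay=0.1):
    """Re-run cmd until it exits 0 or the attempts are exhausted."""
    for _ in range(attempts):
        if subprocess.run(cmd, shell=True).returncode == 0:
            return True
        time.sleep(delay)
    return False

# `true` stands in for a probe that succeeds; `false` for one that never does.
print(wait_for("true"), wait_for("false", attempts=2, delay=0))
```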

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 version -o=json --components: (1.185631991s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244351 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-244351
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-244351
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244351 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/kube-proxy                  | v1.26.2            | sha256:6f64e7 | 21.5MB |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| localhost/my-image                          | functional-244351  | sha256:e6ca67 | 775kB  |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/etcd                        | 3.5.6-0            | sha256:fce326 | 103MB  |
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| docker.io/library/minikube-local-cache-test | functional-244351  | sha256:113372 | 1.12kB |
| gcr.io/google-containers/addon-resizer      | functional-244351  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-controller-manager     | v1.26.2            | sha256:240e20 | 32.2MB |
| registry.k8s.io/kube-scheduler              | v1.26.2            | sha256:db8f40 | 17.5MB |
| docker.io/library/nginx                     | latest             | sha256:904b8c | 56.9MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.26.2            | sha256:63d323 | 35.3MB |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244351 image ls --format json:
[{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},
{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},
{"id":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":["registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c"],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"102542580"},
{"id":"sha256:63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0f03b93af45f39704b7da175db31e20da63d2ab369f350e59de8cbbef9d703e0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.2"],"size":"35329425"},
{"id":"sha256:240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5434d52f88eb16bc5e98ccb65e97e97cb5cf7861749afbf26174d27c4ece1fad"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.2"],"size":"32180749"},
{"id":"sha256:113372f00fdd0a23c4b23fe971a66ccf777539940f1dcf4a47148470bd713567","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-244351"],"size":"1119"},
{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},
{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},
{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},
{"id":"sha256:e6ca67d231bed246934838aa266e232a4967bfd78961fb5da452af33e0cbc927","repoDigests":[],"repoTags":["localhost/my-image:functional-244351"],"size":"775202"},
{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},
{"id":"sha256:904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8","repoDigests":["docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2"],"repoTags":["docker.io/library/nginx:latest"],"size":"56897427"},
{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-244351"],"size":"10823156"},
{"id":"sha256:db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:da109877fd8fd0feba2f9a4cb6a199797452c17ddcfaf7b023cf0bac09e51417"],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.2"],"size":"17489559"},
{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},
{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},
{"id":"sha256:6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d","repoDigests":["registry.k8s.io/kube-proxy@sha256:5dac6611aceb1452a5d4036108a15ceb0699c083a942977e30640d521e7d2078"],"repoTags":["registry.k8s.io/kube-proxy:v1.26.2"],"size":"21541935"},
{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
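One entry from the `image ls --format json` output above is enough to show how its fields map to the table listing (the size is reported in bytes, as a string):

```python
import json

# A single image record copied from the output above (digest fields trimmed).
sample = ('[{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",'
          '"repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"}]')
for img in json.loads(sample):
    tag = img["repoTags"][0] if img["repoTags"] else "<none>"
    print(f'{tag}  {int(img["size"]) / 1000:.0f}kB')
```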

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244351 image ls --format yaml:
- id: sha256:113372f00fdd0a23c4b23fe971a66ccf777539940f1dcf4a47148470bd713567
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-244351
size: "1119"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests:
- registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "102542580"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0f03b93af45f39704b7da175db31e20da63d2ab369f350e59de8cbbef9d703e0
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.2
size: "35329425"
- id: sha256:6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:5dac6611aceb1452a5d4036108a15ceb0699c083a942977e30640d521e7d2078
repoTags:
- registry.k8s.io/kube-proxy:v1.26.2
size: "21541935"
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5434d52f88eb16bc5e98ccb65e97e97cb5cf7861749afbf26174d27c4ece1fad
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.2
size: "32180749"
- id: sha256:db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:da109877fd8fd0feba2f9a4cb6a199797452c17ddcfaf7b023cf0bac09e51417
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.2
size: "17489559"
- id: sha256:904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8
repoDigests:
- docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
repoTags:
- docker.io/library/nginx:latest
size: "56897427"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-244351
size: "10823156"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 ssh pgrep buildkitd: exit status 1 (300.82945ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image build -t localhost/my-image:functional-244351 testdata/build
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 image build -t localhost/my-image:functional-244351 testdata/build: (4.955937293s)
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244351 image build -t localhost/my-image:functional-244351 testdata/build:
#1 [internal] load .dockerignore
#1 transferring context:
#1 transferring context: 2B done
#1 DONE 0.1s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.8s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 1.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:17d0617a865a0e281591888eb29a65f534dfb48408dfa9e272536ab71986b89e 0.0s done
#8 exporting config sha256:e6ca67d231bed246934838aa266e232a4967bfd78961fb5da452af33e0cbc927
#8 exporting config sha256:e6ca67d231bed246934838aa266e232a4967bfd78961fb5da452af33e0cbc927 0.0s done
#8 naming to localhost/my-image:functional-244351 0.0s done
#8 DONE 0.2s
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.47s)

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.716369178s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-244351
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image load --daemon gcr.io/google-containers/addon-resizer:functional-244351
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 image load --daemon gcr.io/google-containers/addon-resizer:functional-244351: (3.813611436s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image load --daemon gcr.io/google-containers/addon-resizer:functional-244351
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 image load --daemon gcr.io/google-containers/addon-resizer:functional-244351: (3.648566825s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.695508199s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-244351
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image load --daemon gcr.io/google-containers/addon-resizer:functional-244351
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 image load --daemon gcr.io/google-containers/addon-resizer:functional-244351: (4.247551863s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.24s)

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 service list -o json
functional_test.go:1492: Took "316.988769ms" to run "out/minikube-linux-amd64 -p functional-244351 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.50.145:30213
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/MountCmd/specific-port (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244351 /tmp/TestFunctionalparallelMountCmdspecific-port2394082487/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.2202ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244351 /tmp/TestFunctionalparallelMountCmdspecific-port2394082487/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244351 ssh "sudo umount -f /mount-9p": exit status 1 (213.063866ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-244351 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244351 /tmp/TestFunctionalparallelMountCmdspecific-port2394082487/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.50.145:30213
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image save gcr.io/google-containers/addon-resizer:functional-244351 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 image save gcr.io/google-containers/addon-resizer:functional-244351 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.298404433s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image rm gcr.io/google-containers/addon-resizer:functional-244351
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.180024503s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-244351
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-244351 image save --daemon gcr.io/google-containers/addon-resizer:functional-244351
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-244351 image save --daemon gcr.io/google-containers/addon-resizer:functional-244351: (1.154257265s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-244351
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-244351
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-244351
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-244351
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestIngressAddonLegacy/StartLegacyK8sCluster (100.44s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-857097 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0307 18:13:09.619313   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-857097 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m40.438429276s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (100.44s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.81s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-857097 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-857097 addons enable ingress --alsologtostderr -v=5: (18.811684087s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.81s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.4s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-857097 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (30.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-857097 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-857097 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.018363363s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-857097 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-857097 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f0f99bbf-98ca-481c-ba44-1a88d4d595f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f0f99bbf-98ca-481c-ba44-1a88d4d595f8] Running
E0307 18:15:25.776250   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.010394849s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-857097 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-857097 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-857097 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.241
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-857097 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-857097 addons disable ingress-dns --alsologtostderr -v=1: (2.248092466s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-857097 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-857097 addons disable ingress --alsologtostderr -v=1: (7.347868855s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (30.80s)

TestJSONOutput/start/Command (71.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-215845 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0307 18:15:53.462052   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-215845 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m11.454313282s)
--- PASS: TestJSONOutput/start/Command (71.45s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-215845 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-215845 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.09s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-215845 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-215845 --output=json --user=testUser: (7.090821946s)
--- PASS: TestJSONOutput/stop/Command (7.09s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.43s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-577246 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-577246 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.668257ms)

-- stdout --
	{"specversion":"1.0","id":"6e7a1581-03bc-49e8-86a0-4ce59e3f4c5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-577246] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"01c54aa6-f1ce-454d-a920-2c835412671c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15985"}}
	{"specversion":"1.0","id":"64d7fc09-d64e-4505-8448-b02fa5c27e4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ff3d5222-5778-407f-a85b-6633b82067a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig"}}
	{"specversion":"1.0","id":"9679423f-395a-445b-a37c-97b2703464af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube"}}
	{"specversion":"1.0","id":"ea2f2031-de39-4658-852e-5706fe0bb361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a810cbcd-1a5d-4fb7-8135-aa7a1f346c1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f65c3a36-dcca-48ed-9f7b-6b0c79afed28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-577246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-577246
--- PASS: TestErrorJSONOutput (0.43s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (112.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-795482 --driver=kvm2  --container-runtime=containerd
E0307 18:17:15.578365   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:15.583634   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:15.593873   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:15.614114   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:15.654361   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:15.734670   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:15.895185   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:16.215758   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:16.856577   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:18.137100   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:20.698132   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:25.819220   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:17:36.060290   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-795482 --driver=kvm2  --container-runtime=containerd: (53.43654064s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-798474 --driver=kvm2  --container-runtime=containerd
E0307 18:17:56.540774   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:18:37.501390   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-798474 --driver=kvm2  --container-runtime=containerd: (56.601584136s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-795482
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-798474
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-798474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-798474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-798474: (1.04418665s)
helpers_test.go:175: Cleaning up "first-795482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-795482
--- PASS: TestMinikubeProfile (112.96s)

TestMountStart/serial/StartWithMountFirst (28.73s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-698627 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-698627 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.729324549s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.73s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-698627 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-698627 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (32.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-712098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-712098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.31939673s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.32s)

TestMountStart/serial/VerifyMountSecond (0.54s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712098 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712098 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.54s)

TestMountStart/serial/DeleteFirst (0.84s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-698627 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.84s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712098 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712098 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.12s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-712098
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-712098: (1.120893213s)
--- PASS: TestMountStart/serial/Stop (1.12s)

TestMountStart/serial/RestartStopped (23.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-712098
E0307 18:19:59.422394   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:20:08.839383   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:08.844631   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:08.854875   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:08.875161   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:08.915496   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:08.995853   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:09.156416   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:09.477031   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:10.118028   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:11.398515   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:13.960299   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:19.080535   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-712098: (22.31431969s)
--- PASS: TestMountStart/serial/RestartStopped (23.31s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712098 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712098 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (152.1s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-373242 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0307 18:20:25.775767   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:20:29.321200   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:20:49.801939   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:21:30.762363   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:22:15.577897   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:22:43.263254   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:22:52.683129   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-373242 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m31.687202045s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (152.10s)

TestMultiNode/serial/DeployApp2Nodes (5.64s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-373242 -- rollout status deployment/busybox: (3.847600989s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-brcl2 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-t96mj -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-brcl2 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-t96mj -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-brcl2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-t96mj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.64s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-brcl2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-brcl2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-t96mj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-373242 -- exec busybox-6b86dd6d48-t96mj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (71.06s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-373242 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-373242 -v 3 --alsologtostderr: (1m10.492097506s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (71.06s)

TestMultiNode/serial/ProfileList (0.25s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (7.32s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp testdata/cp-test.txt multinode-373242:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1372391232/001/cp-test_multinode-373242.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242:/home/docker/cp-test.txt multinode-373242-m02:/home/docker/cp-test_multinode-373242_multinode-373242-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m02 "sudo cat /home/docker/cp-test_multinode-373242_multinode-373242-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242:/home/docker/cp-test.txt multinode-373242-m03:/home/docker/cp-test_multinode-373242_multinode-373242-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m03 "sudo cat /home/docker/cp-test_multinode-373242_multinode-373242-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp testdata/cp-test.txt multinode-373242-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1372391232/001/cp-test_multinode-373242-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242-m02:/home/docker/cp-test.txt multinode-373242:/home/docker/cp-test_multinode-373242-m02_multinode-373242.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242 "sudo cat /home/docker/cp-test_multinode-373242-m02_multinode-373242.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242-m02:/home/docker/cp-test.txt multinode-373242-m03:/home/docker/cp-test_multinode-373242-m02_multinode-373242-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m03 "sudo cat /home/docker/cp-test_multinode-373242-m02_multinode-373242-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp testdata/cp-test.txt multinode-373242-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1372391232/001/cp-test_multinode-373242-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242-m03:/home/docker/cp-test.txt multinode-373242:/home/docker/cp-test_multinode-373242-m03_multinode-373242.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242 "sudo cat /home/docker/cp-test_multinode-373242-m03_multinode-373242.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 cp multinode-373242-m03:/home/docker/cp-test.txt multinode-373242-m02:/home/docker/cp-test_multinode-373242-m03_multinode-373242-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 ssh -n multinode-373242-m02 "sudo cat /home/docker/cp-test_multinode-373242-m03_multinode-373242-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.32s)

TestMultiNode/serial/StopNode (2.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-373242 node stop m03: (1.255846059s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-373242 status: exit status 7 (421.836871ms)

-- stdout --
	multinode-373242
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-373242-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-373242-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr: exit status 7 (432.689459ms)

-- stdout --
	multinode-373242
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-373242-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-373242-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0307 18:24:22.452058   23898 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:24:22.452238   23898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:24:22.452247   23898 out.go:309] Setting ErrFile to fd 2...
	I0307 18:24:22.452252   23898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:24:22.452351   23898 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	I0307 18:24:22.452519   23898 out.go:303] Setting JSON to false
	I0307 18:24:22.452547   23898 mustload.go:65] Loading cluster: multinode-373242
	I0307 18:24:22.452655   23898 notify.go:220] Checking for updates...
	I0307 18:24:22.452934   23898 config.go:182] Loaded profile config "multinode-373242": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0307 18:24:22.452948   23898 status.go:255] checking status of multinode-373242 ...
	I0307 18:24:22.453377   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:24:22.453430   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:24:22.468361   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I0307 18:24:22.468811   23898 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:24:22.469326   23898 main.go:141] libmachine: Using API Version  1
	I0307 18:24:22.469350   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:24:22.469738   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:24:22.469896   23898 main.go:141] libmachine: (multinode-373242) Calling .GetState
	I0307 18:24:22.471408   23898 status.go:330] multinode-373242 host status = "Running" (err=<nil>)
	I0307 18:24:22.471427   23898 host.go:66] Checking if "multinode-373242" exists ...
	I0307 18:24:22.471713   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:24:22.471745   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:24:22.486621   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40921
	I0307 18:24:22.487009   23898 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:24:22.487452   23898 main.go:141] libmachine: Using API Version  1
	I0307 18:24:22.487474   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:24:22.487814   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:24:22.487981   23898 main.go:141] libmachine: (multinode-373242) Calling .GetIP
	I0307 18:24:22.490735   23898 main.go:141] libmachine: (multinode-373242) DBG | domain multinode-373242 has defined MAC address 52:54:00:69:59:9d in network mk-multinode-373242
	I0307 18:24:22.491108   23898 main.go:141] libmachine: (multinode-373242) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:59:9d", ip: ""} in network mk-multinode-373242: {Iface:virbr1 ExpiryTime:2023-03-07 19:20:38 +0000 UTC Type:0 Mac:52:54:00:69:59:9d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-373242 Clientid:01:52:54:00:69:59:9d}
	I0307 18:24:22.491146   23898 main.go:141] libmachine: (multinode-373242) DBG | domain multinode-373242 has defined IP address 192.168.39.227 and MAC address 52:54:00:69:59:9d in network mk-multinode-373242
	I0307 18:24:22.491272   23898 host.go:66] Checking if "multinode-373242" exists ...
	I0307 18:24:22.491536   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:24:22.491571   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:24:22.506929   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41529
	I0307 18:24:22.507386   23898 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:24:22.507885   23898 main.go:141] libmachine: Using API Version  1
	I0307 18:24:22.507905   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:24:22.508211   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:24:22.508382   23898 main.go:141] libmachine: (multinode-373242) Calling .DriverName
	I0307 18:24:22.508568   23898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:24:22.508598   23898 main.go:141] libmachine: (multinode-373242) Calling .GetSSHHostname
	I0307 18:24:22.511520   23898 main.go:141] libmachine: (multinode-373242) DBG | domain multinode-373242 has defined MAC address 52:54:00:69:59:9d in network mk-multinode-373242
	I0307 18:24:22.511952   23898 main.go:141] libmachine: (multinode-373242) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:59:9d", ip: ""} in network mk-multinode-373242: {Iface:virbr1 ExpiryTime:2023-03-07 19:20:38 +0000 UTC Type:0 Mac:52:54:00:69:59:9d Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-373242 Clientid:01:52:54:00:69:59:9d}
	I0307 18:24:22.511979   23898 main.go:141] libmachine: (multinode-373242) DBG | domain multinode-373242 has defined IP address 192.168.39.227 and MAC address 52:54:00:69:59:9d in network mk-multinode-373242
	I0307 18:24:22.512180   23898 main.go:141] libmachine: (multinode-373242) Calling .GetSSHPort
	I0307 18:24:22.512365   23898 main.go:141] libmachine: (multinode-373242) Calling .GetSSHKeyPath
	I0307 18:24:22.512544   23898 main.go:141] libmachine: (multinode-373242) Calling .GetSSHUsername
	I0307 18:24:22.512710   23898 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/multinode-373242/id_rsa Username:docker}
	I0307 18:24:22.598513   23898 ssh_runner.go:195] Run: systemctl --version
	I0307 18:24:22.604317   23898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:24:22.620087   23898 kubeconfig.go:92] found "multinode-373242" server: "https://192.168.39.227:8443"
	I0307 18:24:22.620110   23898 api_server.go:165] Checking apiserver status ...
	I0307 18:24:22.620151   23898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:24:22.631939   23898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1083/cgroup
	I0307 18:24:22.640121   23898 api_server.go:181] apiserver freezer: "2:freezer:/kubepods/burstable/poda8586a4e0c22c05ef0bd9dfdafce8fea/2c59bcfee32d5a9e03b0ff316a1707baa719b28d0e8cd9cd09f923541da47ba4"
	I0307 18:24:22.640190   23898 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda8586a4e0c22c05ef0bd9dfdafce8fea/2c59bcfee32d5a9e03b0ff316a1707baa719b28d0e8cd9cd09f923541da47ba4/freezer.state
	I0307 18:24:22.650052   23898 api_server.go:203] freezer state: "THAWED"
	I0307 18:24:22.650074   23898 api_server.go:252] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0307 18:24:22.655911   23898 api_server.go:278] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0307 18:24:22.655931   23898 status.go:421] multinode-373242 apiserver status = Running (err=<nil>)
	I0307 18:24:22.655939   23898 status.go:257] multinode-373242 status: &{Name:multinode-373242 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:24:22.655952   23898 status.go:255] checking status of multinode-373242-m02 ...
	I0307 18:24:22.656247   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:24:22.656286   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:24:22.670940   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33813
	I0307 18:24:22.671337   23898 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:24:22.671815   23898 main.go:141] libmachine: Using API Version  1
	I0307 18:24:22.671841   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:24:22.672254   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:24:22.672451   23898 main.go:141] libmachine: (multinode-373242-m02) Calling .GetState
	I0307 18:24:22.673827   23898 status.go:330] multinode-373242-m02 host status = "Running" (err=<nil>)
	I0307 18:24:22.673853   23898 host.go:66] Checking if "multinode-373242-m02" exists ...
	I0307 18:24:22.674183   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:24:22.674217   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:24:22.688081   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I0307 18:24:22.688491   23898 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:24:22.688944   23898 main.go:141] libmachine: Using API Version  1
	I0307 18:24:22.688968   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:24:22.689258   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:24:22.689426   23898 main.go:141] libmachine: (multinode-373242-m02) Calling .GetIP
	I0307 18:24:22.692165   23898 main.go:141] libmachine: (multinode-373242-m02) DBG | domain multinode-373242-m02 has defined MAC address 52:54:00:6b:aa:a7 in network mk-multinode-373242
	I0307 18:24:22.692652   23898 main.go:141] libmachine: (multinode-373242-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:aa:a7", ip: ""} in network mk-multinode-373242: {Iface:virbr1 ExpiryTime:2023-03-07 19:21:56 +0000 UTC Type:0 Mac:52:54:00:6b:aa:a7 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-373242-m02 Clientid:01:52:54:00:6b:aa:a7}
	I0307 18:24:22.692672   23898 main.go:141] libmachine: (multinode-373242-m02) DBG | domain multinode-373242-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:6b:aa:a7 in network mk-multinode-373242
	I0307 18:24:22.692804   23898 host.go:66] Checking if "multinode-373242-m02" exists ...
	I0307 18:24:22.693124   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:24:22.693160   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:24:22.707265   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
	I0307 18:24:22.707630   23898 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:24:22.708059   23898 main.go:141] libmachine: Using API Version  1
	I0307 18:24:22.708082   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:24:22.708348   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:24:22.708530   23898 main.go:141] libmachine: (multinode-373242-m02) Calling .DriverName
	I0307 18:24:22.708716   23898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:24:22.708734   23898 main.go:141] libmachine: (multinode-373242-m02) Calling .GetSSHHostname
	I0307 18:24:22.711270   23898 main.go:141] libmachine: (multinode-373242-m02) DBG | domain multinode-373242-m02 has defined MAC address 52:54:00:6b:aa:a7 in network mk-multinode-373242
	I0307 18:24:22.711649   23898 main.go:141] libmachine: (multinode-373242-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:aa:a7", ip: ""} in network mk-multinode-373242: {Iface:virbr1 ExpiryTime:2023-03-07 19:21:56 +0000 UTC Type:0 Mac:52:54:00:6b:aa:a7 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-373242-m02 Clientid:01:52:54:00:6b:aa:a7}
	I0307 18:24:22.711679   23898 main.go:141] libmachine: (multinode-373242-m02) DBG | domain multinode-373242-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:6b:aa:a7 in network mk-multinode-373242
	I0307 18:24:22.711786   23898 main.go:141] libmachine: (multinode-373242-m02) Calling .GetSSHPort
	I0307 18:24:22.711953   23898 main.go:141] libmachine: (multinode-373242-m02) Calling .GetSSHKeyPath
	I0307 18:24:22.712087   23898 main.go:141] libmachine: (multinode-373242-m02) Calling .GetSSHUsername
	I0307 18:24:22.712218   23898 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/multinode-373242-m02/id_rsa Username:docker}
	I0307 18:24:22.805169   23898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:24:22.819412   23898 status.go:257] multinode-373242-m02 status: &{Name:multinode-373242-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:24:22.819454   23898 status.go:255] checking status of multinode-373242-m03 ...
	I0307 18:24:22.819776   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:24:22.819821   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:24:22.835447   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0307 18:24:22.835890   23898 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:24:22.836400   23898 main.go:141] libmachine: Using API Version  1
	I0307 18:24:22.836423   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:24:22.836707   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:24:22.836879   23898 main.go:141] libmachine: (multinode-373242-m03) Calling .GetState
	I0307 18:24:22.838377   23898 status.go:330] multinode-373242-m03 host status = "Stopped" (err=<nil>)
	I0307 18:24:22.838394   23898 status.go:343] host is not running, skipping remaining checks
	I0307 18:24:22.838402   23898 status.go:257] multinode-373242-m03 status: &{Name:multinode-373242-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)

TestMultiNode/serial/StartAfterStop (119.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 node start m03 --alsologtostderr
E0307 18:25:08.838088   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:25:25.776123   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:25:36.523999   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-373242 node start m03 --alsologtostderr: (1m59.173665865s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (119.81s)

TestMultiNode/serial/RestartKeepsNodes (548.01s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-373242
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-373242
E0307 18:26:48.824737   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:27:15.578284   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-373242: (3m4.951169338s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-373242 --wait=true -v=8 --alsologtostderr
E0307 18:30:08.837844   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:30:25.776557   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:32:15.578101   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:33:38.624116   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:35:08.837968   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:35:25.775753   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-373242 --wait=true -v=8 --alsologtostderr: (6m2.965547744s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-373242
--- PASS: TestMultiNode/serial/RestartKeepsNodes (548.01s)

TestMultiNode/serial/DeleteNode (2.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-373242 node delete m03: (1.564713998s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.10s)

TestMultiNode/serial/StopMultiNode (183.43s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 stop
E0307 18:36:31.886313   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:37:15.578438   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-373242 stop: (3m3.271675837s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-373242 status: exit status 7 (79.295379ms)

-- stdout --
	multinode-373242
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-373242-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr: exit status 7 (80.006263ms)

-- stdout --
	multinode-373242
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-373242-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0307 18:38:36.160285   25095 out.go:296] Setting OutFile to fd 1 ...
	I0307 18:38:36.160469   25095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:38:36.160478   25095 out.go:309] Setting ErrFile to fd 2...
	I0307 18:38:36.160484   25095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 18:38:36.160582   25095 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	I0307 18:38:36.160752   25095 out.go:303] Setting JSON to false
	I0307 18:38:36.160776   25095 mustload.go:65] Loading cluster: multinode-373242
	I0307 18:38:36.160868   25095 notify.go:220] Checking for updates...
	I0307 18:38:36.161077   25095 config.go:182] Loaded profile config "multinode-373242": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0307 18:38:36.161088   25095 status.go:255] checking status of multinode-373242 ...
	I0307 18:38:36.161418   25095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:38:36.161474   25095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:38:36.175737   25095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I0307 18:38:36.176185   25095 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:38:36.176768   25095 main.go:141] libmachine: Using API Version  1
	I0307 18:38:36.176795   25095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:38:36.177120   25095 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:38:36.177282   25095 main.go:141] libmachine: (multinode-373242) Calling .GetState
	I0307 18:38:36.178809   25095 status.go:330] multinode-373242 host status = "Stopped" (err=<nil>)
	I0307 18:38:36.178823   25095 status.go:343] host is not running, skipping remaining checks
	I0307 18:38:36.178829   25095 status.go:257] multinode-373242 status: &{Name:multinode-373242 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:38:36.178847   25095 status.go:255] checking status of multinode-373242-m02 ...
	I0307 18:38:36.179209   25095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0307 18:38:36.179247   25095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0307 18:38:36.193077   25095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0307 18:38:36.193385   25095 main.go:141] libmachine: () Calling .GetVersion
	I0307 18:38:36.193790   25095 main.go:141] libmachine: Using API Version  1
	I0307 18:38:36.193821   25095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 18:38:36.194100   25095 main.go:141] libmachine: () Calling .GetMachineName
	I0307 18:38:36.194255   25095 main.go:141] libmachine: (multinode-373242-m02) Calling .GetState
	I0307 18:38:36.195643   25095 status.go:330] multinode-373242-m02 host status = "Stopped" (err=<nil>)
	I0307 18:38:36.195659   25095 status.go:343] host is not running, skipping remaining checks
	I0307 18:38:36.195666   25095 status.go:257] multinode-373242-m02 status: &{Name:multinode-373242-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.43s)

TestMultiNode/serial/RestartMultiNode (237.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-373242 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0307 18:40:08.837709   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:40:25.776481   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:42:15.578026   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-373242 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m57.423694636s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-373242 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (237.96s)

TestMultiNode/serial/ValidateNameConflict (54.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-373242
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-373242-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-373242-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (64.943118ms)

-- stdout --
	* [multinode-373242-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-373242-m02' is duplicated with machine name 'multinode-373242-m02' in profile 'multinode-373242'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-373242-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-373242-m03 --driver=kvm2  --container-runtime=containerd: (53.443134011s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-373242
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-373242: exit status 80 (210.137099ms)

-- stdout --
	* Adding node m03 to cluster multinode-373242
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-373242-m03 already exists in multinode-373242-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-373242-m03
E0307 18:43:28.825106   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-373242-m03: (1.067997051s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.83s)

TestScheduledStopUnix (129.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-034399 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-034399 --memory=2048 --driver=kvm2  --container-runtime=containerd: (58.175825101s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-034399 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-034399 -n scheduled-stop-034399
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-034399 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-034399 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-034399 -n scheduled-stop-034399
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-034399
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-034399 --schedule 15s
E0307 19:02:15.578568   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-034399
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-034399: exit status 7 (64.360164ms)

-- stdout --
	scheduled-stop-034399
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-034399 -n scheduled-stop-034399
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-034399 -n scheduled-stop-034399: exit status 7 (63.363546ms)

-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-034399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-034399
--- PASS: TestScheduledStopUnix (129.87s)

TestRunningBinaryUpgrade (242.01s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.22.0.795002854.exe start -p running-upgrade-154499 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.22.0.795002854.exe start -p running-upgrade-154499 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m48.568132539s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-154499 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-154499 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m9.115960581s)
helpers_test.go:175: Cleaning up "running-upgrade-154499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-154499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-154499: (1.438325477s)
--- PASS: TestRunningBinaryUpgrade (242.01s)

TestKubernetesUpgrade (239.91s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m4.115885353s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-239839
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-239839: (3.093983118s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-239839 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-239839 status --format={{.Host}}: exit status 7 (70.174118ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0307 19:05:08.837482   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m1.534610448s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-239839 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (115.347874ms)

-- stdout --
	* [kubernetes-upgrade-239839] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-239839
	    minikube start -p kubernetes-upgrade-239839 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2398392 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.2, by running:
	    
	    minikube start -p kubernetes-upgrade-239839 --kubernetes-version=v1.26.2
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-239839 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (49.630284161s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-239839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-239839
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-239839: (1.282209676s)
--- PASS: TestKubernetesUpgrade (239.91s)

TestPause/serial/Start (81.62s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-124193 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-124193 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m21.617743226s)
--- PASS: TestPause/serial/Start (81.62s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-179055 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-179055 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (93.453556ms)

-- stdout --
	* [NoKubernetes-179055] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (134.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-179055 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-179055 --driver=kvm2  --container-runtime=containerd: (2m14.648804577s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-179055 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (134.91s)

TestPause/serial/SecondStartNoReconfiguration (26.62s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-124193 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-124193 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (26.606049865s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.62s)

TestStoppedBinaryUpgrade/Setup (2.57s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.57s)

TestStoppedBinaryUpgrade/Upgrade (265.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.22.0.1845567531.exe start -p stopped-upgrade-914089 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.22.0.1845567531.exe start -p stopped-upgrade-914089 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m16.352127621s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.22.0.1845567531.exe -p stopped-upgrade-914089 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.22.0.1845567531.exe -p stopped-upgrade-914089 stop: (5.351096394s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-914089 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-914089 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m3.35849477s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (265.06s)

TestPause/serial/Pause (1.02s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-124193 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-124193 --alsologtostderr -v=5: (1.021581086s)
--- PASS: TestPause/serial/Pause (1.02s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-124193 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-124193 --output=json --layout=cluster: exit status 2 (262.349559ms)

-- stdout --
	{"Name":"pause-124193","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-124193","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-124193 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-124193 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (1.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-124193 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-124193 --alsologtostderr -v=5: (1.117580342s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

TestPause/serial/VerifyDeletedResources (0.42s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)

TestNoKubernetes/serial/StartWithStopK8s (38.09s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-179055 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0307 19:05:25.776548   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-179055 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (36.31544379s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-179055 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-179055 status -o json: exit status 2 (231.320441ms)

-- stdout --
	{"Name":"NoKubernetes-179055","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-179055
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-179055: (1.543202176s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.09s)

TestNoKubernetes/serial/Start (29.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-179055 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-179055 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.265995052s)
--- PASS: TestNoKubernetes/serial/Start (29.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-179055 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-179055 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.265608ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (1.33s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.33s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-179055
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-179055: (1.28739583s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (49.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-179055 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-179055 --driver=kvm2  --container-runtime=containerd: (49.634521486s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (49.63s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-179055 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-179055 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.25908ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestNetworkPlugins/group/false (3.45s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p false-085104 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-085104 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (105.731055ms)

-- stdout --
	* [false-085104] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15985
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0307 19:08:38.973229   32393 out.go:296] Setting OutFile to fd 1 ...
	I0307 19:08:38.973380   32393 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 19:08:38.973388   32393 out.go:309] Setting ErrFile to fd 2...
	I0307 19:08:38.973392   32393 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 19:08:38.973496   32393 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
	I0307 19:08:38.974116   32393 out.go:303] Setting JSON to false
	I0307 19:08:38.975024   32393 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6667,"bootTime":1678209452,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0307 19:08:38.975082   32393 start.go:135] virtualization: kvm guest
	I0307 19:08:38.977846   32393 out.go:177] * [false-085104] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0307 19:08:38.979661   32393 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 19:08:38.979619   32393 notify.go:220] Checking for updates...
	I0307 19:08:38.981393   32393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 19:08:38.983109   32393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
	I0307 19:08:38.984647   32393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
	I0307 19:08:38.986151   32393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0307 19:08:38.987653   32393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 19:08:38.989458   32393 config.go:182] Loaded profile config "cert-expiration-949300": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0307 19:08:38.989558   32393 config.go:182] Loaded profile config "running-upgrade-154499": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0307 19:08:38.989628   32393 config.go:182] Loaded profile config "stopped-upgrade-914089": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0307 19:08:38.989669   32393 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 19:08:39.025987   32393 out.go:177] * Using the kvm2 driver based on user configuration
	I0307 19:08:39.027583   32393 start.go:296] selected driver: kvm2
	I0307 19:08:39.027593   32393 start.go:857] validating driver "kvm2" against <nil>
	I0307 19:08:39.027602   32393 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 19:08:39.029727   32393 out.go:177] 
	W0307 19:08:39.031132   32393 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0307 19:08:39.032676   32393 out.go:177] 

** /stderr **
net_test.go:86: 
----------------------- debugLogs start: false-085104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-085104

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-085104" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: ip r s:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: iptables-save:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: iptables table nat:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> k8s: describe kube-proxy daemon set:
error: context "false-085104" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-085104" does not exist

>>> k8s: kube-proxy logs:
error: context "false-085104" does not exist

>>> host: kubelet daemon status:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: kubelet daemon config:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> k8s: kubelet logs:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.61.96:8443
  name: cert-expiration-949300
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.72.253:8443
  name: running-upgrade-154499
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: stopped-upgrade-914089
contexts:
- context:
    cluster: cert-expiration-949300
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: cert-expiration-949300
  name: cert-expiration-949300
- context:
    cluster: running-upgrade-154499
    user: running-upgrade-154499
  name: running-upgrade-154499
- context:
    cluster: stopped-upgrade-914089
    user: stopped-upgrade-914089
  name: stopped-upgrade-914089
current-context: stopped-upgrade-914089
kind: Config
preferences: {}
users:
- name: cert-expiration-949300
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/cert-expiration-949300/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/cert-expiration-949300/client.key
- name: running-upgrade-154499
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/running-upgrade-154499/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/running-upgrade-154499/client.key
- name: stopped-upgrade-914089
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/stopped-upgrade-914089/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/stopped-upgrade-914089/client.key

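The active profile in the kubeconfig above is recorded in its `current-context` field. As a minimal illustration (the temp file below is a hypothetical stand-in fragment, not the full dump), the value can be pulled out with standard tools; `kubectl config current-context` reports the same thing:

```shell
# Illustrative fragment mirroring the tail of the kubeconfig dumped above.
cat > /tmp/kubeconfig-demo.yaml <<'EOF'
current-context: stopped-upgrade-914089
kind: Config
preferences: {}
EOF

# Extract the active context name; kubectl would report the same via
# `kubectl config current-context --kubeconfig /tmp/kubeconfig-demo.yaml`.
sed -n 's/^current-context: //p' /tmp/kubeconfig-demo.yaml
# → stopped-upgrade-914089
```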
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-085104

>>> host: docker daemon status:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: docker daemon config:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /etc/docker/daemon.json:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: docker system info:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: cri-docker daemon status:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: cri-docker daemon config:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: cri-dockerd version:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: containerd daemon status:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: containerd daemon config:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /etc/containerd/config.toml:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: containerd config dump:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: crio daemon status:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: crio daemon config:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: /etc/crio:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

>>> host: crio config:
* Profile "false-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085104"

----------------------- debugLogs end: false-085104 [took: 2.830635602s] --------------------------------
helpers_test.go:175: Cleaning up "false-085104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-085104
--- PASS: TestNetworkPlugins/group/false (3.45s)
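The debugLogs dump above follows a fixed pattern: each host or cluster diagnostic runs under a `>>> label:` banner, and a failing command (here, every one, since the profile no longer exists) is logged rather than aborting the collection. A minimal sketch of that banner-per-command pattern; `run_diag` and the command list are illustrative, not minikube's actual implementation:

```shell
# Illustrative collector mirroring the ">>> ..." sections above: print a
# banner, run the diagnostic, and capture failures instead of stopping.
run_diag() {
  label="$1"; shift
  echo ">>> ${label}:"
  "$@" 2>&1 || true   # a missing profile/context is reported, not fatal
}

run_diag "host: uname" uname
run_diag "host: /etc/crio" ls /etc/crio   # likely absent; error is captured
```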

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-914089
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-718947 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-718947 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m33.058023867s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.06s)

TestStartStop/group/no-preload/serial/FirstStart (151.47s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-737312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
E0307 19:09:51.890054   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-737312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (2m31.469114948s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (151.47s)

TestStartStop/group/embed-certs/serial/FirstStart (126.12s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-882578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
E0307 19:10:25.776496   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-882578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (2m6.12324089s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (126.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (108.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-345464 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-345464 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (1m48.644227064s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (108.64s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-718947 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c643cc24-c5dc-4b11-9cf4-d1af8a18db0c] Pending
helpers_test.go:344: "busybox" [c643cc24-c5dc-4b11-9cf4-d1af8a18db0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c643cc24-c5dc-4b11-9cf4-d1af8a18db0c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.027584305s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-718947 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-718947 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-718947 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/old-k8s-version/serial/Stop (102.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-718947 --alsologtostderr -v=3
E0307 19:12:15.577886   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-718947 --alsologtostderr -v=3: (1m42.214709106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (102.21s)

TestStartStop/group/no-preload/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-737312 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3bbee55c-f2d5-449f-84c8-835821174d43] Pending
helpers_test.go:344: "busybox" [3bbee55c-f2d5-449f-84c8-835821174d43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3bbee55c-f2d5-449f-84c8-835821174d43] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.02220388s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-737312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.38s)

TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-882578 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a7ddd06-2899-45d0-ab1c-fe4261e916d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5a7ddd06-2899-45d0-ab1c-fe4261e916d9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.032528778s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-882578 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-882578 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-882578 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/Stop (92.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-882578 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-882578 --alsologtostderr -v=3: (1m32.258626031s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-737312 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-737312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/no-preload/serial/Stop (92.04s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-737312 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-737312 --alsologtostderr -v=3: (1m32.041568644s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-345464 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [977c9a46-1cd0-4a5e-bd04-6141f25fdd51] Pending
helpers_test.go:344: "busybox" [977c9a46-1cd0-4a5e-bd04-6141f25fdd51] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [977c9a46-1cd0-4a5e-bd04-6141f25fdd51] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.019719741s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-345464 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-345464 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-345464 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-345464 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-345464 --alsologtostderr -v=3: (1m31.927791056s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.93s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-718947 -n old-k8s-version-718947
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-718947 -n old-k8s-version-718947: exit status 7 (63.588251ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-718947 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
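In the check above, exit status 7 from `minikube status` indicates the host exists but is stopped, which the test explicitly tolerates ("may be ok") before re-enabling the addon. A small shell sketch of that handling; the stub below is a hypothetical stand-in for the real `out/minikube-linux-amd64 status --format={{.Host}}` call:

```shell
# Hypothetical stub for `minikube status --format={{.Host}}` on a stopped
# cluster: prints the host state and exits 7, as in the log above.
minikube_status_stub() {
  echo "Stopped"
  return 7
}

rc=0
out=$(minikube_status_stub) || rc=$?

# Exit code 7 with a "Stopped" host is expected here, not a test failure.
if [ "$rc" -eq 7 ] && [ "$out" = "Stopped" ]; then
  echo "status error: exit status $rc (may be ok)"
fi
# → status error: exit status 7 (may be ok)
```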

TestStartStop/group/old-k8s-version/serial/SecondStart (397.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-718947 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-718947 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (6m36.884534289s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-718947 -n old-k8s-version-718947
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (397.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882578 -n embed-certs-882578
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882578 -n embed-certs-882578: exit status 7 (63.262635ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-882578 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (443.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-882578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-882578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (7m23.64876954s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882578 -n embed-certs-882578
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (443.98s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737312 -n no-preload-737312
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737312 -n no-preload-737312: exit status 7 (61.692086ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-737312 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/no-preload/serial/SecondStart (607.3s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-737312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-737312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (10m6.962916889s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737312 -n no-preload-737312
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (607.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464: exit status 7 (70.430143ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-345464 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (716.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-345464 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
E0307 19:15:08.837777   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 19:15:25.776412   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 19:16:48.826281   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 19:17:15.578669   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 19:20:08.838341   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-345464 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (11m55.854192853s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (716.15s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-h5gvh" [771c399e-df10-462c-bfc5-7b6249559faf] Running
E0307 19:20:25.776782   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015903773s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-h5gvh" [771c399e-df10-462c-bfc5-7b6249559faf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006601914s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-718947 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-718947 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-718947 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-718947 -n old-k8s-version-718947
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-718947 -n old-k8s-version-718947: exit status 2 (263.208659ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-718947 -n old-k8s-version-718947
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-718947 -n old-k8s-version-718947: exit status 2 (247.164862ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-718947 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-718947 -n old-k8s-version-718947
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-718947 -n old-k8s-version-718947
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

TestStartStop/group/newest-cni/serial/FirstStart (70.45s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-075080 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-075080 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (1m10.452378946s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (70.45s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-nrhcl" [52b0aa6a-3d66-428e-9d80-a896e5284f60] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-nrhcl" [52b0aa6a-3d66-428e-9d80-a896e5284f60] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.016070246s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-nrhcl" [52b0aa6a-3d66-428e-9d80-a896e5284f60] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009953727s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-882578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-075080 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/newest-cni/serial/Stop (3.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-075080 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-075080 --alsologtostderr -v=3: (3.125721188s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-882578 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (2.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-882578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-882578 -n embed-certs-882578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-882578 -n embed-certs-882578: exit status 2 (263.758189ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-882578 -n embed-certs-882578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-882578 -n embed-certs-882578: exit status 2 (254.987088ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-882578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-882578 -n embed-certs-882578
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-882578 -n embed-certs-882578
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.75s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-075080 -n newest-cni-075080
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-075080 -n newest-cni-075080: exit status 7 (98.029126ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-075080 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (75.23s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-075080 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-075080 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (1m14.923279751s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-075080 -n newest-cni-075080
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (75.23s)

TestNetworkPlugins/group/auto/Start (94.64s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0307 19:21:53.667227   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
E0307 19:21:53.987861   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
E0307 19:21:54.628425   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
E0307 19:21:55.909224   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
E0307 19:21:58.470222   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
E0307 19:22:03.590739   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
E0307 19:22:13.831519   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
E0307 19:22:15.578051   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 19:22:34.312034   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m34.638258246s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-075080 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-075080 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-075080 -n newest-cni-075080
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-075080 -n newest-cni-075080: exit status 2 (288.433118ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-075080 -n newest-cni-075080
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-075080 -n newest-cni-075080: exit status 2 (272.58091ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-075080 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-075080 -n newest-cni-075080
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-075080 -n newest-cni-075080
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.54s)

TestNetworkPlugins/group/kindnet/Start (80.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0307 19:23:15.272386   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m20.436431405s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.44s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-085104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-085104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6rcfk" [809ddcf0-75b9-4362-993f-bccc3f49152a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6rcfk" [809ddcf0-75b9-4362-993f-bccc3f49152a] Running
E0307 19:23:38.627071   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.008372051s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-085104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/calico/Start (101.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m41.320313239s)
--- PASS: TestNetworkPlugins/group/calico/Start (101.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-m8dnj" [c2c62176-ebd8-4ce9-a2c9-99d2b633de7f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017823688s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-m8dnj" [c2c62176-ebd8-4ce9-a2c9-99d2b633de7f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011616149s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-737312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-737312 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.79s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-737312 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737312 -n no-preload-737312
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737312 -n no-preload-737312: exit status 2 (287.09836ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-737312 -n no-preload-737312
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-737312 -n no-preload-737312: exit status 2 (289.10829ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-737312 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737312 -n no-preload-737312
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-737312 -n no-preload-737312
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.79s)
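The Pause test above drives a fixed sequence: pause the profile, confirm `status --format={{.APIServer}}` reports `Paused` and `--format={{.Kubelet}}` reports `Stopped` (both with a tolerated non-zero exit), then unpause. A minimal sketch of that flow, assuming a hypothetical `verify_pause_cycle` helper with an injectable command runner so it can be exercised without a live cluster; this is illustrative, not minikube's test code:

```python
import subprocess
from typing import Callable, List

def verify_pause_cycle(profile: str, run: Callable[[List[str]], str]) -> bool:
    """Pause the profile, check apiserver/kubelet status, then unpause.

    Mirrors the sequence in the log: pause -> status checks -> unpause.
    `run` executes a command and returns its stdout.
    """
    run(["minikube", "pause", "-p", profile])
    paused_api = run(["minikube", "status",
                      "--format={{.APIServer}}", "-p", profile]).strip()
    stopped_kubelet = run(["minikube", "status",
                           "--format={{.Kubelet}}", "-p", profile]).strip()
    run(["minikube", "unpause", "-p", profile])
    return paused_api == "Paused" and stopped_kubelet == "Stopped"

def shell_runner(cmd: List[str]) -> str:
    # Real runner: `status` exits non-zero while paused (the log's
    # "exit status 2 (may be ok)"), so a non-zero code must not raise.
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```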

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (98.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m38.626460227s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (98.63s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2g94w" [4bd2bf81-811e-4cb5-872e-c216451cc1c8] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023531464s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-085104 "pgrep -a kubelet"
E0307 19:24:37.192899   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-085104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6nhm5" [f89dbdf4-a41b-49d0-b4d4-409d35127bba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6nhm5" [f89dbdf4-a41b-49d0-b4d4-409d35127bba] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.010069418s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)
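The "waiting 15m0s for pods matching ... healthy within Ns" lines come from a poll-until-healthy loop with a deadline. A generic sketch of that pattern (an assumed `wait_for` helper, not minikube's actual implementation):

```python
import time

def wait_for(predicate, timeout: float, interval: float = 0.5) -> float:
    """Poll predicate() until it returns True or the deadline passes.

    Returns the elapsed seconds on success (the log's "healthy within Ns"),
    raises TimeoutError if the condition is never met.
    """
    start = time.monotonic()
    while True:
        if predicate():
            return time.monotonic() - start
        if time.monotonic() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)
```

In the tests, the predicate would be a pod-phase check (e.g. every pod matching `app=netcat` is `Running` and ready).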

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-085104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (109.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0307 19:25:08.837706   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 19:25:25.776079   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m49.166828038s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (109.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zcr5p" [2afe9aa2-2ece-4dee-a009-8609a8ccdbb3] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.020889188s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-085104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.54s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-085104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:148: (dbg) Done: kubectl --context calico-085104 replace --force -f testdata/netcat-deployment.yaml: (1.466119647s)
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-rllx2" [b2318513-8e82-46c6-9fea-e8abb7b70606] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-rllx2" [b2318513-8e82-46c6-9fea-e8abb7b70606] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.013325667s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.54s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-085104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-085104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-085104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-zwgmt" [78172a18-d0f1-4021-835e-431d93635da3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-zwgmt" [78172a18-d0f1-4021-835e-431d93635da3] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.012481459s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-085104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (99.34s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m39.339342826s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (93.35s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0307 19:26:31.890322   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-085104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m33.346892312s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-s4nfp" [922ff7a4-3289-4f06-942d-4b554db78c10] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.256690455s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-s4nfp" [922ff7a4-3289-4f06-942d-4b554db78c10] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.060123599s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-345464 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-345464 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-345464 --alsologtostderr -v=1
E0307 19:26:53.349701   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/old-k8s-version-718947/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-345464 --alsologtostderr -v=1: (1.599643827s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464: exit status 2 (288.048093ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464: exit status 2 (255.27664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-345464 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-345464 -n default-k8s-diff-port-345464
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-085104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-085104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6g498" [88978f48-7286-48df-a4c1-b6f61da3a0cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6g498" [88978f48-7286-48df-a4c1-b6f61da3a0cd] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.008207091s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-085104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-brffn" [d82aaa51-62fa-466a-8ac4-8fa665e9b28d] Running
E0307 19:27:56.890840   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/no-preload-737312/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016383929s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-085104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-085104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-kbdwg" [d3bdab9e-c99f-49c3-a264-f147568b0e63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 19:28:01.353982   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:01.359267   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:01.369563   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:01.389840   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:01.430297   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:01.510421   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:01.671301   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:01.991625   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-kbdwg" [d3bdab9e-c99f-49c3-a264-f147568b0e63] Running
E0307 19:28:03.913056   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
E0307 19:28:06.473835   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.010849026s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-085104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-085104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2gwzt" [aac4b51b-5272-41a1-a3ef-21798595543b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 19:28:02.632133   11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/default-k8s-diff-port-345464/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-2gwzt" [aac4b51b-5272-41a1-a3ef-21798595543b] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007399133s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-085104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-085104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-085104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (34/297)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.2/cached-images (0.00s)

TestDownloadOnly/v1.26.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.2/binaries (0.00s)

TestDownloadOnly/v1.26.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.2/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-108134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-108134
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)

TestNetworkPlugins/group/kubenet (3.17s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-085104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-085104

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-085104

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /etc/hosts:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /etc/resolv.conf:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-085104

>>> host: crictl pods:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: crictl containers:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> k8s: describe netcat deployment:
error: context "kubenet-085104" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-085104" does not exist

>>> k8s: netcat logs:
error: context "kubenet-085104" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-085104" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-085104" does not exist

>>> k8s: coredns logs:
error: context "kubenet-085104" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-085104" does not exist

>>> k8s: api server logs:
error: context "kubenet-085104" does not exist

>>> host: /etc/cni:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: ip a s:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: ip r s:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: iptables-save:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: iptables table nat:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-085104" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-085104" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-085104" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: kubelet daemon config:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> k8s: kubelet logs:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.61.96:8443
  name: cert-expiration-949300
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.72.253:8443
  name: running-upgrade-154499
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: stopped-upgrade-914089
contexts:
- context:
    cluster: cert-expiration-949300
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: cert-expiration-949300
  name: cert-expiration-949300
- context:
    cluster: running-upgrade-154499
    user: running-upgrade-154499
  name: running-upgrade-154499
- context:
    cluster: stopped-upgrade-914089
    user: stopped-upgrade-914089
  name: stopped-upgrade-914089
current-context: stopped-upgrade-914089
kind: Config
preferences: {}
users:
- name: cert-expiration-949300
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/cert-expiration-949300/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/cert-expiration-949300/client.key
- name: running-upgrade-154499
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/running-upgrade-154499/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/running-upgrade-154499/client.key
- name: stopped-upgrade-914089
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/stopped-upgrade-914089/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/stopped-upgrade-914089/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-085104

>>> host: docker daemon status:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: docker daemon config:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: docker system info:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: cri-docker daemon status:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: cri-docker daemon config:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: cri-dockerd version:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: containerd daemon status:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: containerd daemon config:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: containerd config dump:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: crio daemon status:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: crio daemon config:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: /etc/crio:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

>>> host: crio config:
* Profile "kubenet-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085104"

----------------------- debugLogs end: kubenet-085104 [took: 2.789173332s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-085104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-085104
--- SKIP: TestNetworkPlugins/group/kubenet (3.17s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.72s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-085104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-085104

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-085104

>>> host: /etc/nsswitch.conf:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: /etc/hosts:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: /etc/resolv.conf:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-085104

>>> host: crictl pods:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: crictl containers:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> k8s: describe netcat deployment:
error: context "cilium-085104" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-085104" does not exist

>>> k8s: netcat logs:
error: context "cilium-085104" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-085104" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-085104" does not exist

>>> k8s: coredns logs:
error: context "cilium-085104" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-085104" does not exist

>>> k8s: api server logs:
error: context "cilium-085104" does not exist

>>> host: /etc/cni:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: ip a s:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: ip r s:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: iptables-save:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: iptables table nat:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-085104

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-085104

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-085104" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-085104" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-085104

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-085104

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-085104" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-085104" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-085104" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-085104" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-085104" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: kubelet daemon config:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> k8s: kubelet logs:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.61.96:8443
  name: cert-expiration-949300
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.72.253:8443
  name: running-upgrade-154499
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: stopped-upgrade-914089
contexts:
- context:
    cluster: cert-expiration-949300
    extensions:
    - extension:
        last-update: Tue, 07 Mar 2023 19:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: cert-expiration-949300
  name: cert-expiration-949300
- context:
    cluster: running-upgrade-154499
    user: running-upgrade-154499
  name: running-upgrade-154499
- context:
    cluster: stopped-upgrade-914089
    user: stopped-upgrade-914089
  name: stopped-upgrade-914089
current-context: stopped-upgrade-914089
kind: Config
preferences: {}
users:
- name: cert-expiration-949300
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/cert-expiration-949300/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/cert-expiration-949300/client.key
- name: running-upgrade-154499
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/running-upgrade-154499/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/running-upgrade-154499/client.key
- name: stopped-upgrade-914089
  user:
    client-certificate: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/stopped-upgrade-914089/client.crt
    client-key: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/stopped-upgrade-914089/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-085104

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: cri-dockerd version:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: containerd daemon status:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: containerd daemon config:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: containerd config dump:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: crio daemon status:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: crio daemon config:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: /etc/crio:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

>>> host: crio config:
* Profile "cilium-085104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085104"

----------------------- debugLogs end: cilium-085104 [took: 3.311483469s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-085104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-085104
--- SKIP: TestNetworkPlugins/group/cilium (3.72s)